111 |
Precisiones sobre el levantamiento 3D integrado con herramientas avanzadas, aplicado al conocimiento y la conservación del patrimonio arquitectónico. Martínez-Espejo Zaragoza, Isabel, 16 May 2014
The aim of the thesis is to analyse new technologies for integrated architectural surveys, studying the advantages and limitations of each in different architectural contexts, and providing a global vision that unifies terminology and methodology in the fields of architecture and engineering. The new technologies analysed include laser scanning (both time-of-flight and triangulation), image-based 3D modelling and drone-based photogrammetry, along with their integration with classical surveying techniques.
With this goal, several case studies in the field of architectural heritage were examined, using different survey techniques with a number of advanced applications. The case studies enabled us to analyse and study these techniques, and made it quite clear that image-based and range-based modelling techniques must be analysed for their integration rather than compared, since that integration is essential for rendering models with high levels of morphological and chromatic detail.
On the other hand, the experience of working between two different faculties (Architecture in Valencia, Spain, and Civil Engineering in Pisa, Italy) revealed, beyond issues of interpretation between the two languages, a divergence in the terminology used by the different specialists involved in the process, be they engineers (albeit from different branches), architects or archaeologists. Each of these profiles clearly has a different view of architectural heritage, general construction and surveys. The current trend of forming multidisciplinary teams to work on architectural heritage leads us to conclude that a unified technical terminology in this field could facilitate understanding and integration between the different professionals, thus creating a common code. / Martínez-Espejo Zaragoza, I. (2014). Precisiones sobre el levantamiento 3D integrado con herramientas avanzadas, aplicado al conocimiento y la conservación del patrimonio arquitectónico [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37512
|
112 |
Solving continuous reaction-diffusion models in image-based complex geometries. Stark, Justina, 06 November 2024
Porous media, including soil, catalysts, rocks, and organic tissue, are ubiquitous in nature, acting as complex environments through which heat, ions, and chemicals travel. Diffusion, often coupled to interfacial reactions, constitutes a fundamental transport process in porous media. It plays an important role in the transport of fertilizer and contaminants in soil, heat conduction in insulators, and natural phenomena such as geological rock transformations and biological signaling and patterning. This thesis aims to enable a deeper understanding of reaction-diffusion processes in porous media by developing a flexible and computationally efficient numerical modeling and simulation workflow.
Numerical modeling is required whenever the problem is too complex for mechanistic insight by quantitative experiments or analytical theory. Reaction-diffusion processes in porous media are such a complex problem, as transport is coupled to the intricate pore geometry. In addition, they involve different scales, from microscale tortuous diffusion pathways and local reactions to macroscale gradients, requiring models that resolve multiple scales.
Multiscale modeling is, however, challenging due to its large memory requirement and computational cost. In addition, realistic porous media geometries, as can be derived from microscopy images or µCTs, are not parametrizable, requiring algorithmic representation.
We address these issues by developing a scalable, multi-GPU accelerated numerical simulation pipeline that enables memory-efficient multiscale modeling of reaction-diffusion processes in realistic, image-based geometries. This pipeline takes volumetric images as input, from which it derives implicit geometry representations using the level-set method. The diffusion domain is discretized in a geometry-adapted, memory-efficient way using distributed sparse block grids. Reaction-diffusion PDEs are solved in the strong form using the finite difference method with scalable multi-GPU acceleration, enabling the simulation in large, highly resolved 3D samples.
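To make the numerical core concrete, the sketch below shows the kind of strong-form finite-difference update the pipeline performs, on a dense NumPy grid rather than the distributed sparse block grids and multi-GPU backend the thesis actually uses. The pore mask, rates and step sizes are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def masked_laplacian(c, mask, dx):
    """Finite-difference Laplacian restricted to the pore space: fluxes into
    solid voxels (mask == False) are zeroed, approximating no-flux walls.
    Caveat: np.roll wraps around, so the outer domain boundary is effectively
    periodic in this toy version."""
    lap = np.zeros_like(c)
    for axis in range(c.ndim):
        for shift in (+1, -1):
            cn = np.roll(c, shift, axis=axis)
            mn = np.roll(mask, shift, axis=axis)
            lap += np.where(mask & mn, cn - c, 0.0)
    return lap / dx**2

def step(c, mask, D, k, dx, dt):
    """One explicit Euler step of dc/dt = D*lap(c) - k*c inside the pores."""
    return np.where(mask, c + dt * (D * masked_laplacian(c, mask, dx) - k * c), 0.0)

# Toy usage: in the real pipeline the pore mask comes from a segmented
# volumetric image (e.g. a µCT stack), not from random noise.
rng = np.random.default_rng(0)
mask = rng.random((32, 32, 32)) > 0.3          # True = pore, False = solid
c = np.where(mask, 1.0, 0.0)
D, k, dx = 1.0, 0.05, 1.0
dt = 0.9 * dx**2 / (6 * D)                     # explicit 3D stability bound
for _ in range(200):
    c = step(c, mask, D, k, dx, dt)
```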
We demonstrate the versatility of the present pipeline by simulating reaction-diffusion processes in the image-derived 3D geometries of four applications: fertilizer diffusion in soil, heat conduction with surface dissipation in reticulate porous ceramics, fluid-mediated mineral replacement in rocks, and morphogen gradient formation in the extracellular space of a gastrulating zebrafish embryo. The former two are used to benchmark the performance of our pipeline, whereas the latter two address real-world problems from geology and biology, respectively.
The geological problem considers a process called dolomitization, which converts calcite into dolomite. Because it determines the geophysical characteristics of the Earth's most abundant rocks, dolomitization plays an important role in engineering and geology. Predicting dolomitization is hampered by the extreme scales involved, as mountain-scale dolomite is produced by ion-scale reactions over millions of years. Using the presented pipeline, we derive rock geometries from µCTs and simulate dolomitization as an inhomogeneous reaction-diffusion process with moving reaction fronts and phase-dependent diffusion. The simulation results show that reaction and diffusion alone are not sufficient to explain the reaction-front roughness observed experimentally, implying that other processes, such as advection or porosity fingering, or sub-resolution geometric features, such as microcracks in the rock, play an important role in dolomitization.
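As a hedged illustration of the phase-dependent diffusion just described (not the thesis's actual discretization), the previous sketch extends naturally to a spatially varying diffusivity: D takes one value in calcite and another in dolomite, and the reaction front moves as voxels convert and their diffusivity is updated.

```python
import numpy as np

def phase_dependent_step(c, D, mask, k, dx, dt):
    """Explicit step of dc/dt = div(D grad c) - k*c with spatially varying
    diffusivity D (e.g. D = D_calcite or D = D_dolomite per voxel). Face
    diffusivities use arithmetic means; np.roll again implies periodic edges."""
    div = np.zeros_like(c)
    for axis in range(c.ndim):
        for shift in (+1, -1):
            cn = np.roll(c, shift, axis=axis)
            Dn = np.roll(D, shift, axis=axis)
            mn = np.roll(mask, shift, axis=axis)
            div += np.where(mask & mn, 0.5 * (D + Dn) * (cn - c), 0.0)
    return np.where(mask, c + dt * (div / dx**2 - k * c), 0.0)
```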
The biological problem, which constitutes the main application of this thesis, is the formation of morphogen gradients during embryonic development. This is a particularly complex problem influenced by several factors, such as dynamically changing tissue geometries, localized sources and sinks, and interaction with molecules of the extracellular matrix (e.g., HSPG). The abundance of factors involved and the coupling between them make it difficult to quantify how they modulate the gradient individually and collectively.
We use the present pipeline to reconstruct realistic extracellular space (ECS) geometries of a zebrafish embryo from a light-sheet microscopy video. In these geometries, we simulate the gradient formation of the morphogen Fgf8a, showing for the first time in realistic embryo geometries that a source-diffusion-degradation mechanism with HSPG binding is sufficient for the spontaneous formation and maintenance of robust long-range morphogen gradients. We further test gradient sensitivity against different source, sink, and HSPG-binding rates and show that the gradient becomes distorted when ECS volume or connectivity in the model changes, demonstrating the importance of considering realistic embryo geometries.
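For intuition about why a source-diffusion-degradation (SDD) mechanism yields a robust gradient, the one-dimensional idealization has a well-known exponential steady state; this is only a textbook simplification, since the thesis solves the full problem in image-derived 3D ECS geometries with binding:

```latex
\partial_t C = D\,\partial_x^2 C - kC
\quad\Longrightarrow\quad
C_{\mathrm{ss}}(x) = C(0)\,e^{-x/\lambda},
\qquad \lambda = \sqrt{D/k}
```

The decay length λ grows with the diffusion coefficient D and shrinks with the degradation rate k, which is why varying the source, sink, and binding rates probes the robustness of the gradient.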
In summary, this thesis shows that modeling highly resolved, realistic 3D geometries is computationally feasible using geometry-adapted sparse grids, achieving an 18-fold reduction in memory requirements for the zebrafish model compared to a dense-grid implementation. Multi-CPU/GPU acceleration enables pore-scale simulation of large systems. The pipeline developed in this thesis is fully open-source and versatile, as demonstrated by its application to different kinds of porous media, and we anticipate its future application to other reaction-diffusion problems in porous media, in particular from biology.
|
113 |
Fast and Scalable Structure-from-Motion for High-precision Mobile Augmented Reality Systems. Bae, Hyojoon, 24 April 2014
A key problem in mobile computing is providing people access to necessary cyber-information associated with their surrounding physical objects. Mobile augmented reality is one of the emerging techniques that address this key problem by allowing users to see the cyber-information associated with real-world physical objects by overlaying that cyber-information on the physical objects' imagery. As a consequence, many mobile augmented reality approaches have been proposed to identify and visualize relevant cyber-information on users' mobile devices by intelligently interpreting users' positions and orientations in 3D and their associated surroundings. However, existing approaches for mobile augmented reality primarily rely on Radio Frequency (RF) based location tracking technologies (e.g., Global Positioning Systems or Wireless Local Area Networks), which typically do not provide sufficient precision in RF-denied areas, or require additional hardware and custom mobile devices.
To remove the dependency on external location tracking technologies, this dissertation presents a new vision-based context-aware approach for mobile augmented reality that allows users to query and access semantically-rich 3D cyber-information related to real-world physical objects and see it precisely overlaid on top of imagery of the associated physical objects. The approach does not require any RF-based location tracking modules, external hardware attachments on the mobile devices, and/or optical/fiducial markers for localizing a user's position. Rather, the user's 3D location and orientation are automatically and purely derived by comparing images from the user's mobile device to a 3D point cloud model generated from a set of pre-collected photographs.
A further challenge of mobile augmented reality is creating 3D cyber-information and associating it with real-world physical objects, especially using the limited 2D user interfaces of standard mobile devices. To address this challenge, this research provides a new image-based 3D cyber-physical content authoring method designed specifically for the limited screen sizes and capabilities of commodity mobile devices. This new approach not only provides a method for creating 3D cyber-information with standard mobile devices, but also automatically associates user-driven cyber-information with real-world physical objects in 3D.
Finally, a key challenge of scalability for mobile augmented reality is addressed in this dissertation. In general, mobile augmented reality must work regardless of users' location and environment, both in terms of physical scale, such as the size of objects, and in terms of cyber-information scale, such as the total number of cyber-information entities associated with physical objects. However, many existing approaches to mobile augmented reality have mainly been tested on limited real-world use-cases and face challenges in scaling. By designing fast direct 2D-to-3D matching algorithms for localization, as well as applying a caching scheme, the proposed research consistently supports near real-time localization and information association regardless of the users' location, the size of physical objects, and the number of cyber-physical information items.
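A minimal sketch of the direct 2D-to-3D matching idea follows, assuming SciPy; it shows only the cached-index principle, not the dissertation's actual cached-tree and double-stage matching algorithms (listed below), and real systems typically use approximate nearest-neighbour search because exact k-d trees degrade in 128-dimensional descriptor spaces.

```python
import numpy as np
from scipy.spatial import cKDTree

# Descriptors of the pre-built 3D point cloud are indexed once and the tree
# is cached, so each incoming query image only pays for lookups.
model_desc = np.random.rand(10000, 128).astype(np.float32)  # one descriptor per 3D point
model_xyz = np.random.rand(10000, 3)                        # the points' 3D positions
tree = cKDTree(model_desc)                                  # built offline, reused online

def match_2d_to_3d(query_desc, ratio=0.8):
    """Match query-image feature descriptors directly to 3D model points,
    keeping matches that pass Lowe's ratio test; the resulting 2D-3D
    correspondences would then feed a pose (PnP) solver for localization."""
    dist, idx = tree.query(query_desc, k=2)
    keep = dist[:, 0] < ratio * dist[:, 1]
    return idx[keep, 0], model_xyz[idx[keep, 0]]
```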
To realize all of these research objectives, five research methods are developed and validated: 1) Hybrid 4-Dimensional Augmented Reality (HD4AR), 2) plane-transformation-based 3D cyber-physical content authoring from a single 2D image, 3) cached k-d tree generation for fast direct 2D-to-3D matching, 4) a double-stage matching algorithm with a single indexed k-d tree, and 5) k-means clustering of 3D physical models with geo-information. After discussing each solution in technical detail, the perceived benefits and limitations of the research are discussed together with validation results. / Ph. D.
|
114 |
Free View Rendering for 3D Video: Edge-Aided Rendering and Depth-Based Image Inpainting. Muddala, Suryanarayana Murthy, January 2015
Three-Dimensional Video (3DV) has become increasingly popular with the success of 3D cinema. Moreover, emerging display technology offers an immersive experience to the viewer without the need for visual aids such as 3D glasses. The 3DV applications Three-Dimensional Television (3DTV) and Free-Viewpoint Television (FTV) are promising technologies for living-room environments, providing an immersive experience and look-around facilities. To provide such an experience, these technologies require a number of camera views captured from different viewpoints. However, capturing and transmitting the required number of views is not feasible, so view rendering is employed as an efficient way to produce the necessary views. Depth-image-based rendering (DIBR) is a commonly used rendering method. Although DIBR is a simple approach that can produce the desired number of views, its inherent artifacts are a major issue in view rendering. Despite much effort to tackle rendering artifacts over the years, rendered views still contain visible artifacts. This dissertation addresses three problems in order to improve 3DV quality: 1) how to improve rendered-view quality using a direct approach, without dealing with each artifact specifically; 2) how to handle disocclusions (a.k.a. holes) in the rendered views in a visually plausible manner using inpainting; and 3) how to reduce spatial inconsistencies in the rendered view. The first problem is tackled by an edge-aided rendering method that uses a direct approach with one-dimensional interpolation, applicable when the virtual camera distance is small. The second problem is addressed by a depth-based inpainting method in the virtual view, which reconstructs the missing texture at disocclusions using background data. The third problem is handled by a rendering method that first inpaints occlusions as a layered depth image (LDI) in the original view and then renders a spatially consistent virtual view. Objective assessments of the proposed methods show improvements over state-of-the-art rendering methods. Visual inspection shows slight improvements for intermediate views rendered from multiview video-plus-depth, and the proposed methods outperform other view rendering methods when rendering from single-view video-plus-depth. The results confirm that the proposed methods are capable of reducing rendering artifacts and producing spatially consistent virtual views. In conclusion, the view rendering methods proposed in this dissertation can support the production of high-quality virtual views from a limited number of input views. When used to create a multi-scopic presentation, the outcome of this dissertation can help 3DV technologies improve the immersive experience.
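As context for the rendering methods above, the sketch below shows the forward-warping core of DIBR for a single scanline under a rectified horizontal camera shift; it is a deliberately minimal, scalar-intensity illustration, not the dissertation's edge-aided method, and the parameter names are assumptions.

```python
import numpy as np

def dibr_warp_row(color_row, depth_row, f, baseline):
    """Forward-warp one scanline to a horizontally shifted virtual camera.
    Disparity = f * baseline / depth; when two source pixels land on the
    same target, the nearer one (larger disparity) wins; unfilled targets
    remain NaN and mark the disocclusion holes that inpainting must fill."""
    w = color_row.shape[0]
    out = np.full(w, np.nan)
    best = np.full(w, -np.inf)
    disparity = f * baseline / depth_row
    target = np.round(np.arange(w) + disparity).astype(int)
    for x in range(w):
        t = target[x]
        if 0 <= t < w and disparity[x] > best[t]:
            out[t] = color_row[x]
            best[t] = disparity[x]
    return out
```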
|
115 |
Contributions à l'acquisition, à la modélisation et à l'augmentation d'environnements complexes / Contributions to acquisition, modelling and augmented rendering of complex environments. Fouquet, François, 10 December 2012
Today, augmented images are part of our daily life. From the movie industry to video games, architecture and object design, many applications need to display synthetic objects in a real context. However, coherently integrating those objects into their environment can be a difficult task. When the environment is vast or includes complex geometry or lighting, its modelling becomes tedious, and using such models to render augmented images is resource-consuming. Moreover, applications like augmented reality need efficient rendering methods to run in real time. They also have to adapt automatically to a priori unknown environments, with progressively acquired images as their only source of information. In this thesis, we build on methods from computer vision, image-based modelling and image synthesis to propose a global approach to the problem of coherently augmenting complex, progressively discovered environments. We develop new acquisition methods for obtaining high-dynamic-range RGB+Z images registered in the environment. We then show how to exploit this information to incrementally build representations of the geometry and lighting of the scene to be augmented. Finally, we present new rendering approaches suited to these models, enabling fast generation of augmented images in which the lighting of the synthetic objects remains coherent with that of the environment.
|
116 |
Generisanje prostora na osnovu perspektivnih slika i primena u oblasti graditeljskog nasleđa / Modeling Based on Perspective Images and Application in Cultural Heritage. Stojaković, Vesna, 16 August 2011
In this research, a new semi-automated normative image-based modelling system is created. The system comprises a set of procedures used to transform two-dimensional media, typically photographs, into a three-dimensional structure. The approach is adapted to the properties of complex projects in the domain of visualization of cultural heritage, and an application of the system is given, demonstrating its practical value.
|
117 |
Physical and computational models of the gloss exhibited by the human hair tress: a study of conventional and novel approaches to the gloss evaluation of human hair. Rizvi, Syed, January 2013
The evaluation of the gloss of human hair, following wet/dry chemical treatments such as bleaching, dyeing and perming, has received much scientific and commercial attention. Current gloss analysis techniques use constrained viewing conditions in which the hair tresses are observed under directional lighting within a calibrated presentation environment. The hair tresses are classified by applying computational models of the fibres' physical and optical attributes and evaluated either by a panel of human observers or by computational modelling of gloss intensity distributions processed from captured digital images. The most popular technique used in industry for automatically assessing hair gloss is to digitally capture images of the hair tresses and produce a classification based upon the average gloss intensity distribution. Unfortunately, the results from current computational modelling techniques are often inconsistent with the panel discriminations of human observers. In order to develop a Gloss Evaluation System that produces the same judgements as both computational models and human psychophysical panel assessments, the human visual system has to be considered. An image-based Gloss Evaluation System with gonio-capture capability has been developed, characterised and tested. A new interpretation of the interaction between reflection bands has been identified on the hair tress images, and a novel method was developed to segment the diffuse, chroma and specular regions from the image of the hair tress. A new model has been developed, based on Hunter's contrast gloss approach, to quantify the gloss of the human hair tress. Furthermore, a large number of hair tresses were treated with a range of commercial hair products to simulate different levels of hair shine. To conduct a psychophysical experiment, a one-dimensional-scaling paired-comparison test, a MATLAB GUI (graphical user interface) was developed to display images of the hair tresses on a calibrated screen. Participants were asked to select the image that demonstrated the greatest gloss. To understand what users were attending to and how they used the different reflection bands in their quantification of the gloss of the human hair tress, the GUI was run on an eye-tracking system. The results of several gloss evaluation models were compared with the participants' choices from the psychophysical experiment. The novel gloss assessment models developed during this research correlated more closely with the participants' choices and were more sensitive to changes in gloss than the conventional models used in the study.
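One plausible form of the Hunter-style contrast-gloss score mentioned above is sketched below; the thesis's actual model also involves the chroma band, so this is an assumption-laden illustration, not the author's formula.

```python
import numpy as np

def contrast_gloss(image, specular_mask, diffuse_mask):
    """Contrast gloss as the normalised difference between the mean
    intensity of the specular band and that of the diffuse band; the two
    boolean masks are assumed to come from the band-segmentation step."""
    s = float(image[specular_mask].mean())
    d = float(image[diffuse_mask].mean())
    return (s - d) / (s + d)   # 0 = matte; approaches 1 as specularity dominates
```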
|
118 |
[en] Reconstruction of Scenes from Images by Coarse-to-Fine Space Carving / [pt] Reconstrução de cenas a partir de imagens através de escultura do espaço por refinamento adaptativo. Montenegro, Anselmo Antunes, 03 March 2004
The reconstruction of scenes from images has recently received great interest from researchers in the areas of computer vision, computer graphics and geometric modeling. Examples of applications include image-based scene reconstruction, modeling of complex as-built objects, construction of virtual environments and telepresence. Among the methods that have produced good results in reconstructing scenes from images are those based on Space Carving algorithms. These techniques reconstruct the shape of the objects of interest in a scene by determining, in a volumetric representation of the scene space, the elements that satisfy a set of photometric constraints imposed by the input images. Once determined, each photo-consistent element is colorized according to the photometric information in the input images, so that it reproduces that information within a pre-specified error tolerance based on statistical criteria. In this work, we investigate the use of rendering techniques in the development of space carving methods. As a result, we propose a method based on an adaptive refinement process that works on reconstruction spaces represented by spatial subdivisions. Such a method can carry out the reconstruction more efficiently, spending effort in proportion to the local characteristics of the scene, which are discovered as the reconstruction proceeds. Finally, we evaluate the quality and efficiency of the proposed method on results obtained with an object-reconstruction system that uses images captured by webcams.
|
119 |
High Dynamic Range Panoramic Imaging with Scene Motion. Silk, Simon, 17 November 2011
Real-world radiance values can range over eight orders of magnitude from starlight to direct sunlight but few digital cameras capture more than three orders in a single Low Dynamic Range (LDR) image. We approach this problem using established High Dynamic Range (HDR) techniques in which multiple images are captured with different exposure times so that all portions of the scene are correctly exposed at least once. These images are then combined to create an HDR image capturing the full range of the scene. HDR capture introduces new challenges; movement in the scene creates faded copies of moving objects, referred to as ghosts.
Many techniques have been introduced to handle ghosting, but they typically either address specific types of ghosting or are computationally very expensive. We address ghosting by first detecting moving objects, then reducing their contribution to the final composite on a frame-by-frame basis. Motion is detected by performing change detection on exposure-normalized images. Additional special cases are developed based on a priori knowledge of the changing exposures; for example, if exposure is increasing with every shot, then any decrease in intensity in the LDR images is a strong indicator of motion. Recent superpixel over-segmentation techniques are used to refine the detection. We also propose a novel solution for areas that move throughout the capture, such as foliage blowing in the wind. Such areas are detected as always moving and are replaced with information from a single input image; the replacement of corrupted regions can be tailored to the scenario.
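A minimal sketch of change detection on exposure-normalized images follows; it assumes linearised sensor values and omits the saturation handling, exposure-order special cases and superpixel refinement described above.

```python
import numpy as np

def motion_mask(ldr_images, exposure_times, rel_thresh=0.2):
    """Divide each linearised LDR frame by its exposure time so that static,
    unsaturated pixels take approximately the same radiance value in every
    frame, then flag pixels whose normalised value changes noticeably
    between consecutive exposures."""
    norm = [im.astype(np.float64) / t for im, t in zip(ldr_images, exposure_times)]
    mask = np.zeros(norm[0].shape, dtype=bool)
    for a, b in zip(norm[:-1], norm[1:]):
        rel_change = np.abs(a - b) / np.maximum(np.maximum(a, b), 1e-6)
        mask |= rel_change > rel_thresh
    return mask   # flagged pixels get down-weighted in the HDR merge
```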
We present our approach in the context of a panoramic tele-presence system. Tele-presence systems allow a user to experience a remote environment, aiming to create a realistic sense of "being there"; such a system should therefore provide a high-quality visual rendition of the environment. Furthermore, panoramas, by virtue of capturing a greater proportion of a real-world scene, are often exposed to a greater dynamic range than standard photographs. Both facets of this system therefore stand to benefit from HDR imaging techniques.
We demonstrate the success of our approach on multiple challenging ghosting scenarios, and compare our results with state-of-the-art methods previously proposed. We also demonstrate computational savings over these methods.
|
120 |
Variable-aperture Photography. Hasinoff, Samuel William, 19 January 2009
While modern digital cameras incorporate sophisticated engineering, in terms of their core functionality, cameras have changed remarkably little in more than a hundred years. In particular, from a given viewpoint, conventional photography essentially remains limited to manipulating a basic set of controls: exposure time, focus setting, and aperture setting.
In this dissertation we present three new methods in this domain, each based on capturing multiple photos with different camera settings. In each case, we show how defocus can be exploited to achieve different goals, extending what is possible with conventional photography. These methods are closely connected, in that all rely on analyzing changes in aperture.
First, we present a 3D reconstruction method especially suited for scenes with high geometric complexity, for which obtaining a detailed model is difficult using previous approaches. We show that by controlling both the focus and aperture setting, it is possible to compute depth for each pixel independently. To achieve this, we introduce the "confocal constancy" property, which states that as the aperture setting varies, the pixel intensity of an in-focus scene point will vary in a scene-independent way that can be predicted by prior calibration.
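A loose sketch of how confocal constancy could drive per-pixel depth estimation is given below; this is our reading of the idea under simplifying assumptions (a calibrated per-aperture scale factor, and a best-focus-setting label standing in for depth), not the dissertation's actual estimator.

```python
import numpy as np

def depth_labels_from_confocal_constancy(stack, calib):
    """stack: linearised images of shape (F focus settings, A apertures, H, W);
    calib[a]: calibrated, scene-independent factor by which an in-focus pixel's
    intensity scales at aperture a relative to aperture 0. For each pixel,
    pick the focus setting whose measured aperture profile best matches the
    calibrated in-focus prediction."""
    F, A, H, W = stack.shape
    ref = stack[:, 0:1, :, :]                  # aperture-0 image per focus setting
    pred = calib.reshape(1, A, 1, 1) * ref     # predicted in-focus intensities
    err = ((stack - pred) ** 2).sum(axis=1)    # (F, H, W) residual per focus
    return err.argmin(axis=0)                  # best-focus index per pixel
```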
Second, we describe a method for synthesizing photos with adjusted camera settings in post-capture, to achieve changes in exposure, focus setting, etc. from very few input photos. To do this, we capture photos with varying aperture and other settings fixed, then recover the underlying scene representation best reproducing the input. The key to the approach is our layered formulation, which handles occlusion effects but is tractable to invert. This method works with the built-in "aperture bracketing" mode found on most digital cameras.
Finally, we develop a "light-efficient" method for capturing an in-focus photograph in the shortest time, or with the highest quality for a given time budget. While the standard approach involves reducing the aperture until the desired region is in focus, we show that by "spanning" the region with multiple large-aperture photos, we can reduce the total capture time and generate the in-focus photo synthetically. Beyond more efficient capture, our method provides 3D shape at no additional cost.
|