1 |
Multi-View Imaging of Drosophila Embryos. Groh, Paul. January 2008.
There are several reasons for imaging a single, developing embryo from multiple viewpoints. The embryo is a complex biomechanical system, and morphogenetic movements in one region typically produce motions in adjacent areas. Multi-view imaging can be used to observe morphogenesis and gain a better understanding of normal and abnormal embryo development. The system would allow
the embryo to be rotated to a specific vantage point so that a particular morphogenetic process may be
observed clearly. Moreover, a multi-view system can be used to gather images to create an accurate three-dimensional reconstruction of the embryo for computer simulations. The scope of this thesis
was to construct an apparatus that could capture multi-view images for these applications.
A multi-view system for imaging live Drosophila melanogaster embryos, the first of its kind, is presented. Embryos for imaging are collected from genetically modified Drosophila stocks that contain a green fluorescent protein (GFP), which highlights only specific cell components. The embryos are mounted on a wire that is rotated under computer control to desired viewpoints in front of the objective of a custom-built confocal microscope. The optical components for the horizontally aligned microscope were researched, selected and installed specifically for this multi-viewing
apparatus.
The image stacks captured from each viewpoint are deconvolved and collaged to show all of the cells visible from that view. The process of rotating and capturing images can be repeated for many angles over the course of one hour. Experiments were conducted to verify the repeatability of the rotation mechanism and to determine the number of image slices required to produce a satisfactory collage from each viewpoint.
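The abstract does not spell out the collage algorithm, but the deconvolve-and-collage step can be illustrated with a short sketch. The code below assumes Richardson-Lucy deconvolution with a known point-spread function and a per-pixel sharpest-slice rule (an extended-depth-of-field heuristic); both choices are illustrative assumptions, not the thesis's documented method.

```python
# Sketch of the deconvolve-and-collage step for one viewpoint's image stack.
# Assumptions (not from the thesis): Richardson-Lucy deconvolution with a
# known/estimated PSF, and a collage built by keeping, per pixel, the slice
# with the highest local sharpness (an extended-depth-of-field heuristic).
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.restoration import richardson_lucy

def collage_stack(stack, psf, n_iter=20, win=9):
    """stack: (n_slices, H, W) float images in [0, 1]; psf: 2D point-spread function."""
    deconvolved = np.stack(
        [richardson_lucy(sl, psf, num_iter=n_iter) for sl in stack]
    )
    # Local variance as a cheap per-pixel sharpness measure.
    mean = uniform_filter(deconvolved, size=(1, win, win))
    sharpness = uniform_filter(deconvolved**2, size=(1, win, win)) - mean**2
    best = np.argmax(sharpness, axis=0)  # (H, W) index of the sharpest slice
    return np.take_along_axis(deconvolved, best[None], axis=0)[0]
```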
Additional testing was conducted to establish that the system could capture a complete 360° view of the embryo, and a time-lapse study was done to verify that a developing embryo could be imaged
repeatedly from two separate angles during ventral furrow formation. An analysis of the effects of the imaging system on embryos in terms of photo-bleaching and viability is presented.
|
2 |
Multi-view machine learning for integration of brain imaging and (epi)genomics data. Bai, Yuntong. Tulane University. January 2021.
No description available.
|
3 |
Real-Time View-Interpolation System for Super Multi-View 3D Display. HONDA, Toshio; FUJII, Toshiaki; HAMAGUCHI, Tadahiko. 01 January 2003.
No description available.
|
4 |
On surrogate supervision multi-view learning. Jin, Gaole. 03 December 2012.
Data can be represented in multiple views. Traditional multi-view learning methods (e.g., co-training, multi-task learning) focus on improving learning performance using information from an auxiliary view, even though information from the target view alone is sufficient for the learning task. This work instead addresses a semi-supervised case of multi-view learning, surrogate supervision multi-view learning, in which labels are available only on limited views and a classifier must be obtained for the target view, where labels are missing. In surrogate supervision multi-view learning, one cannot obtain a classifier without information from the auxiliary view. To solve this challenging problem, we propose discriminative and generative approaches.
Graduation date: 2013
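As a concrete illustration of the problem setting, the sketch below shows one simple discriminative route: a classifier trained on the labeled auxiliary view pseudo-labels co-occurring (auxiliary, target) view pairs, and those surrogate labels supervise a target-view classifier. The synthetic data, model choice (logistic regression), and pseudo-labeling strategy are assumptions for illustration, not necessarily the approaches proposed in the thesis.

```python
# Sketch of one discriminative route to surrogate supervision multi-view
# learning: labels exist only on the auxiliary view, so a classifier trained
# there pseudo-labels co-occurring (auxiliary, target) pairs, and those
# pseudo-labels supervise a classifier on the target view.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_aux_lab = rng.normal(size=(200, 10))           # auxiliary view, labeled
y_lab = (X_aux_lab[:, 0] > 0).astype(int)
X_aux_pair = rng.normal(size=(500, 10))          # unlabeled pairs: auxiliary view
X_tgt_pair = X_aux_pair @ rng.normal(size=(10, 8)) + 0.1 * rng.normal(size=(500, 8))

aux_clf = LogisticRegression().fit(X_aux_lab, y_lab)
pseudo = aux_clf.predict(X_aux_pair)             # surrogate labels from the auxiliary view
tgt_clf = LogisticRegression().fit(X_tgt_pair, pseudo)

X_tgt_test = rng.normal(size=(5, 8))             # at test time, only the target view exists
print(tgt_clf.predict(X_tgt_test))
```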
|
5 |
Omnidirectional High Dynamic Range Imaging with a Moving Camera. Zhou, Fanping. January 2014.
Common cameras, with a dynamic range of about two orders of magnitude, cannot reproduce typical outdoor scenes, whose radiance range can exceed five orders of magnitude. Most high dynamic range (HDR) imaging techniques reconstruct the full dynamic range from exposure-bracketed low dynamic range (LDR) images, but the camera must be kept steady, with little or no motion, which is impractical in many cases. We therefore develop a more efficient framework for omnidirectional HDR imaging with a moving camera.
The proposed framework is composed of three major stages: geometric calibration and rotational alignment, multi-view stereo correspondence, and HDR composition. First, camera poses are determined and the omnidirectional images are rotationally aligned. Second, the aligned images are fed into a spherical vision toolkit to find disparity maps. Third, the enhanced disparity maps are used to warp differently exposed neighboring images to a target view, and an HDR radiance map is obtained by fusing the registered images in radiance. We develop disparity-based forward and backward image-warping algorithms for spherical stereo vision and implement them on the GPU. We also explore techniques for disparity-map enhancement, including a superpixel technique and a color model for outdoor scenes.
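The composition stage can be sketched as a weighted radiance fusion, assuming the images are already registered and linearized, the exposure times are known, and a Debevec-Malik-style hat weight is used; these assumptions are illustrative, as the abstract does not fix the exact weighting.

```python
# Sketch of the HDR composition stage: fuse registered, differently exposed
# images into a radiance map. Assumes linearized pixel values in [0, 1],
# known exposure times, and a hat weighting function (Debevec-Malik style);
# the thesis's exact weighting and camera-response handling may differ.
import numpy as np

def fuse_radiance(images, exposure_times, eps=1e-6):
    """images: (n, H, W) linear intensities in [0, 1]; exposure_times: (n,) seconds."""
    images = np.asarray(images, dtype=np.float64)
    t = np.asarray(exposure_times, dtype=np.float64)[:, None, None]
    w = 1.0 - np.abs(2.0 * images - 1.0)   # hat weight: trust mid-range pixels
    radiance = images / t                  # per-exposure radiance estimate
    return (w * radiance).sum(axis=0) / (w.sum(axis=0) + eps)
```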
We examine different factors such as exposure increment step size, sequence ordering, and the baseline between views. We demonstrate success on indoor and outdoor scenes and compare our results with two state-of-the-art HDR imaging methods. The proposed HDR framework captures HDR radiance maps, disparity maps and an omnidirectional field of view, enabling applications such as HDR view synthesis and virtual navigation.
|
6 |
3D Video Capture of a Moving Object in a Wide Area Using Active Cameras. Yamaguchi, Tatsuhisa. 24 September 2013.
Doctor of Informatics, Kyoto University. Graduate School of Informatics, Department of Intelligence Science and Technology. Thesis committee: Prof. Takashi Matsuyama (chair), Prof. Michihiko Minoh, Prof. Yuichi Nakamura.
|
7 |
3D Face Reconstruction from a Front Image by Pose Extension in Latent Space. Zhang, Zhao. 27 September 2023.
Numerous techniques for 3D face reconstruction from a single image exist, making use of large facial databases. However, they commonly encounter quality issues due to the absence of information from other perspectives. For example, 3D reconstruction from a single front-view input has limited realism, particularly in profile views. We have observed that multiple-view 3D face reconstruction yields higher-quality models than single-view reconstruction. Based on this observation, we propose a novel pipeline that combines several deep-learning methods to enhance the quality of reconstruction from a single frontal view.
Our method requires only a single image (front view) as input, yet it generates multiple realistic facial viewpoints using various deep-learning networks. These viewpoints are utilized to create a 3D facial model, significantly enhancing the 3D face quality. Traditional image-space editing has limitations in manipulating content and styles while preserving high quality. However, editing in the latent space, which is the space after encoding or before decoding in a neural network, offers greater capabilities for manipulating a given photo.
Motivated by the ability of neural networks to generate 2D images from an extensive database, and recognizing that multi-view 3D face reconstruction outperforms single-view approaches, we propose a new pipeline. It manipulates the latent space by first finding a latent vector corresponding to the given image using Generative Adversarial Network (GAN) inversion. We then search for nearby latent vectors to synthesize multiple pose images from the provided input image, aiming to enhance 3D face reconstruction.
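A minimal sketch of this latent-space stage is given below: optimization-based GAN inversion followed by a linear walk along a pose direction. Here `G` (a pretrained generator), `target` (the input photo as a tensor), and `pose_direction` (a latent direction separating yaw angles, found InterFaceGAN-style) are hypothetical stand-ins; the thesis's actual inversion method and edit directions may differ.

```python
# Sketch of the latent-space stage: optimization-based GAN inversion followed
# by a linear walk along a pose direction. `G`, `target`, and `pose_direction`
# are hypothetical stand-ins, not APIs taken from the thesis.
import torch

def invert(G, target, latent_dim=512, steps=500, lr=0.01):
    """Find a latent w whose rendering G(w) matches the target image (L2 loss)."""
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(w), target)
        loss.backward()
        opt.step()
    return w.detach()

def pose_variants(G, w, pose_direction, alphas=(-2.0, -1.0, 1.0, 2.0)):
    """Synthesize nearby-pose images by stepping along the pose direction."""
    return [G(w + a * pose_direction) for a in alphas]
```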
The generated images are then fed into a Diffusion model, another image-synthesis network, to generate their respective profile views; Diffusion models are known to produce more realistic large-angle variations of a given object than GANs do. Subsequently, all of these multi-view images are fed into an Autoencoder, a neural network designed for 3D face model prediction, to derive the 3D structure of the face. Finally, texture is mapped onto the 3D face model to enhance its realism, and certain areas of the 3D shape are refined to correct any unrealistic aspects.
Our experimental results validate the effectiveness and efficiency of our method in reconstructing highly accurate 3D models of human faces from a single input (front view input) image. The reconstructed models retain high visual fidelity to the original image, even without the need for a 3D database.
|
8 |
Curious Travellers: Using web-scraped and crowd-sourced imagery in support of heritage under threat. Wilson, Andrew S.; Gaffney, Vincent; Gaffney, Christopher F.; Ch'ng, E.; Bates, R.; Ichumbaki, E.B.; Sears, G.; Sparrow, Thomas; Murgatroyd, Andrew; Faber, Edward; Evans, Adrian A.; Coningham, R. 19 August 2022.
Designed as a pragmatic approach that anticipates change to cultural heritage, this chapter discusses responses that encompass records for tangible cultural heritage (monuments, sites and landscapes) and the narratives that see the impact upon them. The Curious Travellers project provides a mechanism for digitally documenting heritage sites that have been destroyed or are under immediate threat from unsympathetic development, neglect, natural disasters, conflict and cultural vandalism. The project created and tested data-mining and crowd-sourced workflows that enable the accurate digital documentation and 3D visualisation of buildings, archaeological sites, monuments and heritage at risk. When combined with donated content, image data are used to recreate 3D models of endangered and lost monuments and heritage sites using a combination of open-source and proprietary methods. These models are queried against contextual information, helping to place and interrogate structures with relevant site and landscape data for the surrounding environment. Geospatial records such as aerial imagery and 3D mobile mapping laser scan data serve as a framework for adding new content and testing accuracy. In preserving time-event records, image metadata offers important information on visitor habits and conservation pressures, which can be used to inform measures for site management.
Funding: The Curious Travellers project was funded as a component of the AHRC Digital Transformations Theme Large Grant 'Fragmented Heritage' (AH/L00688X/1). AHRC Follow-on funding has seen this approach contribute to the BReaTHe project (AH/S005951/1), which seeks to Build Resilience Through Heritage for displaced communities, with a contribution to the BA Cities and Infrastructures Scheme project, 'Reducing Disaster Risk to Life and Livelihoods by evaluating the seismic performance of retrofitted interventions within Kathmandu's UNESCO World Heritage Site during the 2015 Earthquake', with Durham University (KF1\100109).
|
9 |
Structure from Motion with Unstructured RGBD Data. Svensson, Niclas. January 2021.
This thesis covers the topic of depth-assisted Structure from Motion (SfM). In classic SfM, the goal is to reconstruct a 3D scene using only a set of unstructured RGB images. This thesis adds the depth dimension to the problem formulation and, consequently, creates a system that can receive a set of RGBD images. The problem has been addressed by modifying an already existing SfM pipeline, and in particular its Bundle Adjustment (BA) process. Comparing the modified framework with the baseline framework shows mainly two things. First, the accuracy of the framework is increased in most situations; the difference is most significant when the captured scene is covered only from a small sector, although noisy data can cause the modified pipeline to perform slightly worse than the baseline. Second, the run time of the framework is significantly reduced. A discussion of how to modify other parts of the pipeline is given in the conclusion of the report.
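The abstract does not detail how the BA cost was modified, but a common way to fold depth into bundle adjustment is to append a weighted depth residual to the usual reprojection residual, as in the sketch below; the parameterization and the weight `lambda_d` are illustrative assumptions, not the thesis's exact formulation.

```python
# Sketch of a depth-assisted bundle adjustment residual: alongside the usual
# reprojection error, each observation with a depth measurement contributes a
# depth residual, weighted by lambda_d. Robust losses and pose parameterization
# are omitted; this illustrates the modified cost, not the thesis's code.
import numpy as np

def rgbd_residuals(R, t, K, X, uv_obs, depth_obs, lambda_d=1.0):
    """R (3,3), t (3,): camera pose; K (3,3): intrinsics; X: (N, 3) points;
    uv_obs: (N, 2) observed pixels; depth_obs: (N,) measured depths."""
    Xc = X @ R.T + t                             # points in the camera frame
    proj = Xc @ K.T
    uv = proj[:, :2] / proj[:, 2:3]              # pinhole projection
    r_reproj = (uv - uv_obs).ravel()             # classic reprojection error
    r_depth = lambda_d * (Xc[:, 2] - depth_obs)  # extra RGBD depth term
    return np.concatenate([r_reproj, r_depth])
```

Such a residual vector can be minimized with, for example, scipy.optimize.least_squares over the stacked pose and point parameters.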
|