1 |
Image-based 3D Model Construction. Chen, Kuan-Chen. 25 July 2001 (has links)
The shape construction of three-dimensional objects has numerous applications in areas that include manufacturing, virtual simulation, science, medicine, and consumer marketing. In this thesis, we consider an automatic system that captures and triangulates views of real-world 3D objects and finally registers and integrates them.
There are four steps in our system: image acquisition, image processing, model construction, and stereoscopic display. In the first step, image acquisition, we take 2D image pairs from different angles of the model with a single CCD camera, moving the camera between shots. In the second step, image processing, we derive depth from the two images captured by the CCD camera by finding registration points between them using image segmentation, feature extraction, and image registration.
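The registration step described above hinges on finding corresponding features between the two images. A minimal illustration of descriptor matching with a ratio test follows; this is an illustrative sketch, not the thesis's actual implementation, and the descriptors here are synthetic:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test.
    desc_a, desc_b: (N, d) arrays of feature descriptors."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only unambiguous matches (best clearly beats second-best)
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# synthetic descriptors: b is a noisy, shuffled copy of a
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 8))
perm = np.array([2, 0, 4, 1, 3])
b = a[perm] + 0.01 * rng.normal(size=(5, 8))
m = match_features(a, b)
```

The matched index pairs recover the shuffle, giving the registration points that the depth computation would then use.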
The third step, 3D model construction, is divided into three parts. In the first part, we generate partial depth surfaces by Delaunay triangle splitting for a selected set of viewing directions. In the second part, the different surfaces are mapped into a uniform coordinate system for the given 3D object. Integration of the registered surfaces defines the third part of the model construction; this finally leads to the generation of a complete 3D model of the given scene or object.
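The Delaunay-based surface generation in the first part can be illustrated with SciPy's triangulation, assuming sparse depth samples at known image positions. This is a sketch of the general technique, not the system described here; the sample points are made up:

```python
import numpy as np
from scipy.spatial import Delaunay

# hypothetical sparse depth samples: (x, y) image positions with depth z
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
z = np.array([2.0, 2.1, 2.2, 2.3, 2.05])

tri = Delaunay(pts)  # 2D triangulation of the image-plane points
# each 2D simplex lifts to a 3D surface triangle via the per-point depth
surface = [(pts[s], z[s]) for s in tri.simplices]
```

Triangulating in the image plane and lifting each vertex by its depth yields the partial depth surface for one viewing direction.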
After generating a complete 3D model, we create a stereoscopic view in the last step. The viewer puts on LC shutter glasses and looks through the lenses at a high-resolution, full-color display while the lenses shutter on and off alternately. The monitor displays only the left view while the right lens of the glasses shutters, and displays only the right view while the left lens shutters.
|
2 |
Image-Based Relighting. Huang, Jingyuan. 19 April 2010 (has links)
This thesis proposes a method for changing the lighting in some types of images. The method requires only a single input image, either a studio photograph or a synthetic image, consisting of several simple objects placed on a uniformly coloured background. Based on 2D information (contours, shadows, specular areas) extracted from the input image, the method reconstructs a 3D model of the original lighting and 2.5D models of the objects in the image. It then modifies the appearance of shading and shadows to achieve relighting. It can produce visually satisfactory results without a full 3D description of the scene geometry, and requires minimal user assistance.
While developing this method, the importance of different cues for understanding 3D geometry, such as contours or shadows, was considered. Constraints like symmetry that help determine surface shapes were also explored. The method has potential application in improving the appearance of existing photographs. It can also be used in image compositing to achieve consistent lighting.
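The core relighting idea can be illustrated with a Lambertian model: divide out the shading implied by the estimated original light and multiply in the shading of a new light. This is a toy sketch under a pure-Lambertian assumption, not the thesis's actual procedure:

```python
import numpy as np

def relight(image, normals, old_light, new_light):
    """Lambertian re-shading: divide out the old shading, apply the new.
    image: (H, W) intensity; normals: (H, W, 3) unit surface normals."""
    old_l = np.asarray(old_light, float); old_l /= np.linalg.norm(old_l)
    new_l = np.asarray(new_light, float); new_l /= np.linalg.norm(new_l)
    old_shade = np.clip(normals @ old_l, 1e-3, None)  # avoid divide-by-zero
    new_shade = np.clip(normals @ new_l, 0.0, None)
    return image * (new_shade / old_shade)

# a flat, fronto-parallel patch lit head-on, then re-lit from 45 degrees
normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0
img = np.ones((2, 2))
relit = relight(img, normals, old_light=[0, 0, 1], new_light=[1, 0, 1])
```

Tilting the light to 45 degrees dims the fronto-parallel patch by the cosine of the angle, which is the behaviour a relighting method must reproduce consistently across shading and shadows.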
|
4 |
Image-Based View Synthesis. Avidan, Shai; Evgeniou, Theodoros; Shashua, Amnon; Poggio, Tomaso. 01 January 1997 (has links)
We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position is significantly far away from the viewing cone of the example images ("view extrapolation"), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, thus demonstrating the ability of representing an object by a relatively small number of model images --- for the purpose of cheap and fast viewers that can run on standard hardware.
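The interpolation/extrapolation idea can be caricatured with straight-line motion of corresponding points. This is a drastic simplification for intuition only; the actual method drives the warp with a chain of trilinear tensors, not linear blending, and all numbers below are invented:

```python
import numpy as np

def synthesize_view(pts_a, pts_b, alpha):
    """Move corresponding image points along the line joining their two
    example positions; alpha in [0, 1] interpolates between the example
    views, while alpha outside that range extrapolates beyond them."""
    return (1.0 - alpha) * pts_a + alpha * pts_b

# two corresponding points observed in two closely spaced example views
a = np.array([[10.0, 20.0], [30.0, 40.0]])
b = np.array([[12.0, 20.0], [33.0, 40.0]])
mid = synthesize_view(a, b, 0.5)    # a view between the examples
extra = synthesize_view(a, b, 2.0)  # a view beyond the second example
```

The point of the paper's tensor formulation is precisely that it extrapolates geometrically correctly where this naive linear version would not.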
|
5 |
Novel Skeletal Representation for Articulated Creatures. Brostow, Gabriel Julian. 12 April 2004 (has links)
This research examines an approach for capturing 3D surface and structural data of moving articulated creatures. Given the task of non-invasively and automatically capturing such data, a methodology and the associated experiments are presented that apply to multi-view videos of the subject's motion. Our thesis states: a functional structure and the time-varying surface of an articulated creature subject are contained in a sequence of its 3D data. A functional structure is one example of the possible arrangements of internal mechanisms (kinematic joints, springs, etc.) that is capable of performing the motions observed in the input data.
Volumetric structures are frequently used as shape descriptors for 3D data. The capture of such data is being facilitated by developments in multi-view video and range scanning, extending to subjects that are alive and moving. In this research, we examine vision-based modeling and the related representation of moving articulated creatures using Spines. We define a Spine as a branching axial structure representing the shape and topology of a 3D object's limbs, and capturing the limbs' correspondence and motion over time.
The Spine concept builds on the skeletal representations often used to describe the internal structure of an articulated object and its significant protrusions. Our representation of a Spine provides enhancements over a 3D skeleton. These enhancements form temporally consistent limb hierarchies that contain correspondence information about real motion data. We present a practical implementation that approximates a Spine's joint probability function to reconstruct Spines for synthetic and real subjects that move. In general, our approach combines the objectives of generalized cylinders, 3D scanning, and markerless motion capture to generate baseline models from real puppets, animals, and human subjects.
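A crude, purely illustrative way to extract one limb's axial curve from volumetric data is to take per-slice centroids of occupied voxels. A real Spine is a branching structure recovered very differently; this sketch only conveys the notion of an axial shape descriptor, and the volume is synthetic:

```python
import numpy as np

def axial_spine(volume, axis=0):
    """Crude axial curve: per-slice centroid of occupied voxels along
    `axis`. Captures one straight limb's axis; no branching handled."""
    spine = []
    for i, sl in enumerate(np.moveaxis(volume, axis, 0)):
        ys, xs = np.nonzero(sl)
        if len(ys):
            spine.append((float(i), ys.mean(), xs.mean()))
    return np.array(spine)

# a straight 'limb': a one-voxel-wide column through a 5x5x5 volume
vol = np.zeros((5, 5, 5), dtype=bool)
vol[:, 2, 2] = True
curve = axial_spine(vol)
```

Repeating such an axial extraction per frame, with correspondence across time, is the kind of information a Spine organizes into a temporally consistent limb hierarchy.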
|
6 |
Viability of Photogrammetry for As-built Surveys without Control Points in Building Renovation Projects. Liu, Yang. 16 December 2013 (has links)
In recent years, it has become more and more common to use 3D modeling technology to reconstruct cultural heritage sites. The most common way to deliver the 3D model of an existing object is based on hands-on surveys and CAD tools, which can be impractical for large or complex structures in terms of time consumption and cost. Recently, laser scanning technology and more automated photogrammetric modeling methods have become available, making the 3D reconstruction of real-world objects easier. Photogrammetry is one of the most cost-effective approaches for gathering the physical information of an object, such as size, location, and appearance. The equipment it requires, a camera, is also easy and inexpensive to operate. However, photogrammetry has a drawback: the relatively low accuracy of its output. Accurate drawings or models have so far only been achieved with other approaches, such as 3D laser scanning or a total station.
The 3D model of Francis Hall at Texas A&M University, which will be renovated soon, was created in order to investigate whether an image-based 3D model produced using photogrammetry would be acceptable for use in renovation projects. For this investigation, the elapsed time for data acquisition and 3D modeling was measured. The accuracy level of the image-based 3D model and the deficiencies of this approach were also recorded. Then, the image-based 3D model of Francis Hall was presented in the BIM CAVE to four industry professionals and one graduate student. The regular 3D model of Francis Hall, created using dimensions extracted from 2D drawings, was also presented to the interviewees in the BIM CAVE. After viewing the two different 3D models (image-based and regular) of the same building, the five interviewees were asked to describe the differences they noticed between them.
By reviewing and analyzing the interview data, the following conclusions could be drawn. First, the image-based 3D model of Francis Hall gave viewers a stronger sense of reality than traditional CAD drawings or BIM models. Second, the image-based 3D model could be used to save travel, show details, improve coordination and design, and serve as a facilities management and marketing tool. Third, for this approach to be practical for the industry, the time and cost of generating the image-based 3D model should be no greater than the time and cost for architects to conduct a survey and generate CAD drawings or a BIM model.
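The accuracy assessment described above amounts to percent error between surveyed and photogrammetric dimensions. A sketch with made-up numbers, not data from the study:

```python
def accuracy_report(surveyed, modeled):
    """Percent error of photogrammetric dimensions against survey values."""
    report = {}
    for name, true_v in surveyed.items():
        err = abs(modeled[name] - true_v) / true_v * 100.0
        report[name] = round(err, 2)
    return report

# hypothetical dimensions (metres): ground-truth survey vs. photogrammetry
surveyed = {"wall_length_m": 12.50, "door_height_m": 2.10}
modeled = {"wall_length_m": 12.31, "door_height_m": 2.16}
report = accuracy_report(surveyed, modeled)
```

Reporting per-dimension percent error makes it easy to check each measurement against whatever tolerance a renovation project requires.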
|
7 |
Image-based Vehicle Localization. Wang, Dong. 01 July 2019 (has links)
Localization is a crucial topic in navigation, especially for autonomous vehicles. It is usually done using a global positioning system (GPS) sensor. Even though there have been many studies of vehicle localization in recent years, most of them combine a GPS sensor with other sensors to get a more accurate result [1]. In this thesis, we propose a novel image-based vehicle localization method that uses a vision sensor and computer vision techniques to extract the text landmarks surrounding a vehicle and locate its position.
First, we explore the feasibility of image-based vehicle localization that uses the text landmarks of a position to locate the vehicle. A text landmark model, a location matching algorithm, and a basic localization model are proposed, which allow a vehicle to find the best matching location in the database by cross-checking the text landmarks from the query image against those from reference location images.
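One way to realize the cross-checking described above is to score each reference location by the overlap between its text landmarks and those read from the query image. A sketch using Jaccard similarity; the thesis's actual matching algorithm may differ, and all location names and strings below are invented:

```python
def match_location(query_texts, reference_db):
    """Score each reference location by the overlap of its text landmarks
    with the query's (Jaccard similarity); return the best match."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    scores = {loc: jaccard(query_texts, texts)
              for loc, texts in reference_db.items()}
    return max(scores, key=scores.get), scores

# hypothetical database of locations and the text landmarks seen there
db = {"loc_A": ["STARBUCKS", "MAIN ST", "BANK"],
      "loc_B": ["PIZZA", "ELM ST"]}
best, scores = match_location(["MAIN ST", "BANK", "EXIT"], db)
```

Set-based scoring tolerates extra or missed detections in the query, which matters because OCR on street scenes is noisy.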
Second, we propose two more robust localization models that take the vehicle's moving distance and heading direction as additional inputs, which significantly improves localization accuracy.
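The distance-and-heading inputs amount to a dead-reckoning motion constraint between landmark observations. A minimal sketch of the position update; this is an assumption about how such data could be used, not the thesis's model:

```python
import math

def dead_reckon(x, y, distance, heading_deg):
    """Advance a position by the travelled distance along a compass
    heading (0 deg = north/+y, 90 deg = east/+x)."""
    h = math.radians(heading_deg)
    return x + distance * math.sin(h), y + distance * math.cos(h)

# drive 10 units due east from the origin
x2, y2 = dead_reckon(0.0, 0.0, 10.0, 90.0)
```

A predicted position like this can prune the set of candidate reference locations before text-landmark matching, which is one plausible reason such inputs improve accuracy.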
Finally, we simulate an experiment to evaluate our three localization models and further demonstrate their robustness through experimental results. / Master of Science / Nowadays, the global positioning system (GPS) is the major approach to locating positions. However, GPS is not as reliable as one might think. Under some environmental conditions, GPS cannot provide continuous navigation information. Moreover, GPS signals can be jammed or spoofed by malicious attackers.
In this thesis, we aim to explore how to locate a vehicle's position without using a GPS sensor. We propose a novel image-based vehicle localization method that uses a vision sensor and computer vision techniques to extract the text landmarks surrounding the vehicle and locate its position.
Various tools and techniques are explored in the course of the research. Based on the results, we propose several localization models and simulate an experiment to demonstrate their robustness.
|
8 |
Incorporating image-based data in AADT estimation: methodology and numerical investigation of increased accuracy. Jiang, Zhuojun. 24 August 2005 (has links)
No description available.
|
9 |
View Rendering for 3DTV. Muddala, Suryanarayana Murthy. January 2013 (has links)
Advancements in three-dimensional (3D) technologies are rapidly increasing. Three-dimensional television (3DTV) aims at creating a 3D experience for the home user. Moreover, multiview autostereoscopic displays provide a depth impression without requiring any special glasses and can be viewed from multiple locations. One of the key issues in the 3DTV processing chain is content generation from the available input data formats, video plus depth and multiview video plus depth. These data make it possible to produce virtual views using depth-image-based rendering. Although depth-image-based rendering is an efficient method, it is known for producing artifacts such as cracks, corona, and empty regions in rendered images. While several approaches have tackled the problem, reducing the artifacts in rendered images is still an active field of research. Two problems are addressed in this thesis in order to achieve better 3D video quality in the context of view rendering: first, how to improve the quality of rendered views using a direct approach (i.e., without applying specific processing steps for each artifact), and second, how to fill large missing areas in a visually plausible manner using neighbouring details from around the missing regions. This thesis introduces a new depth-image-based rendering method and a depth-based texture inpainting method to address these two problems. The first problem is solved by an edge-aided rendering method that relies on the principles of forward warping and one-dimensional interpolation. The second is addressed by a depth-included curvature inpainting method that uses texture details at appropriate depth levels around disocclusions. The proposed edge-aided rendering and depth-included curvature inpainting methods are evaluated and compared with state-of-the-art methods. The results show an increase in objective quality and a visual gain over the reference methods.
The quality gain is encouraging, as the edge-aided rendering method omits the specific processing steps otherwise needed to remove rendering artifacts. Moreover, the results show that large disocclusions can be effectively filled using the depth-included curvature inpainting approach. Overall, the proposed approaches improve content generation for 3DTV and, additionally, for free viewpoint television.
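The underlying rendering principle, forward warping followed by one-dimensional interpolation over the resulting holes, can be sketched for a single image row. This is a toy illustration of depth-image-based rendering in general, not the thesis's edge-aided method, and the disparity model is simplified:

```python
import numpy as np

def dibr_row(colors, depth, baseline_shift):
    """Forward-warp one image row by a per-pixel disparity (~ shift/depth),
    then fill disocclusion holes by 1D linear interpolation."""
    w = len(colors)
    out = np.full(w, np.nan)
    out_depth = np.full(w, np.inf)
    for x in range(w):
        disparity = int(round(baseline_shift / depth[x]))
        nx = x + disparity
        # keep the nearest surface when several pixels land on one target
        if 0 <= nx < w and depth[x] < out_depth[nx]:
            out[nx], out_depth[nx] = colors[x], depth[x]
    # 1D interpolation over the remaining holes (cracks/disocclusions)
    holes = np.isnan(out)
    if holes.any() and not holes.all():
        xs = np.arange(w)
        out[holes] = np.interp(xs[holes], xs[~holes], out[~holes])
    return out

# a constant-depth row warped by one pixel leaves a hole at the edge
row = np.arange(8, dtype=float)
depth = np.full(8, 4.0)
warped = dibr_row(row, depth, baseline_shift=4.0)
```

Even this toy shows why hole filling is needed: the warp exposes regions with no source pixel, which the interpolation step then fills from neighbouring values.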
|
10 |
Visualisering av brottsplatser (Visualization of Crime Scenes). Beck, Jonas; Brorsson Läthén, Klas. January 2006 (has links)
This work was carried out in collaboration with the Swedish National Police Board (Rikspolisstyrelsen) to develop a method for using modern media technology to create a "virtual crime scene". The aim is for the work to result in a proposed method suitable for integration into police crime scene investigations and legal proceedings, taking into account the special requirements involved.
The work consists of two main parts: the first starts from what can be done with equipment and technology already available, and the second examines how this could be developed further. For the first part, a proposed method for exploiting panorama technology was developed; evaluations and tests of existing software were therefore also carried out to determine what best fits the purpose. For the second part, a custom solution was developed and implemented in OpenGL/C++. This solution is based on laser scanning data. The result of this part is not a finished method that can be used immediately, but rather an example of how panorama technology can be used for more than simply showing what a place looks like. To connect the project with reality, both parts were applied to several real cases.
One conclusion that can be drawn from the work is that visualizations of this type are very useful and benefit investigators and prosecutors. Much remains to be investigated, but there is no doubt that this type of technology is useful for this purpose.
|