121 |
Calculation of the radiative lifetime and optical properties for three-dimensional (3D) hybrid perovskites / Mohammad, Khaled Shehata Baiuomy January 2016 (has links)
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science
to the Faculty of Science, University of the Witwatersrand, Johannesburg.
June 2016. / The discovery of materials for future technologies relies on combining efficient
numerical techniques with scientific intuition to find new types of materials. In addition,
being able to calculate the radiative lifetimes of excitons, exciton properties, and optical
properties with efficient numerical techniques makes it possible to estimate and identify
the best candidate materials for a solar cell. This approach is inexpensive and stable. Present
ab initio methods based on many-body perturbation theory and density functional theory are
capable of predicting these properties accurately enough for most cases.
The electronic properties of the reference system GaAs and of the 3D hybrid perovskite
CH3NH3PbI3 are calculated using density functional theory. The optical properties are investigated
by calculating the dielectric function. The theoretical framework for the radiative lifetime
of excitons and for calculating the exciton properties is based on the Wannier model of the exciton
and the Bethe-Salpeter equation. / MT2017
|
122 |
Clustering-based force-directed algorithms for three-dimensional graph visualization / Lu, Jia Wei January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
|
123 |
Binocular tone mapping. / 雙目色調映射 / CUHK electronic theses & dissertations collection / Shuang mu se diao ying she / January 2012 (has links)
With the booming of 3D movies and video games, binocular (stereo) display devices have become increasingly popular and affordable. By introducing one additional image space, stereo displays double the image domains available for visualization, one for the left eye and the other for the right eye. Existing binocular display systems use this dual image domain only for stereopsis. / Human binocular vision can fuse not only two images with disparity, but also two images that differ in luminance, contrast, and even detail, into a single percept, up to a certain limit. This phenomenon is known as binocular single vision. Through a complicated neurophysiological fusion process, humans can perceive more visual content via binocular single vision than through one arbitrary single view or the linear blending of two views. / In this thesis, binocular single vision is utilized in computer graphics for the first time. Based on this phenomenon, a novel binocular tone mapping framework is proposed.
From the source high-dynamic range (HDR) image, the proposed framework generates a binocular low-dynamic range (LDR) image pair that preserves more human-perceivable visual content than a single LDR image, using the additional image domain. Given a tone mapping method, our framework first generates one tone-mapped LDR image (the left, without loss of generality) with the default or user-selected parameters. Its counterpart (the right image of the LDR pair) is then optimally synthesized from the same source HDR image. The two LDR images are not identical and contain different visual information. Via binocular displays, they aggregately present more human-perceivable visual richness than any single LDR image. / Human binocular vision has a tolerance on the difference between two views. When this limit is exceeded, binocular viewing discomfort appears. To prevent such visual discomfort, a novel binocular viewing comfort predictor (BVCP) is also proposed to predict the comfort threshold of binocular vision. In our framework, the BVCP guides the generation of the LDR image pair without triggering visual discomfort. Through several user studies, the effectiveness of the proposed framework in increasing human-perceivable visual richness and the ability of the proposed BVCP to predict the binocular discomfort threshold have been demonstrated and validated. / Detailed summary in vernacular field only. / Yang, Xuan. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 108-115). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
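The LDR pair described above depends on running a tone mapping operator twice with different parameters. The sketch below uses a Reinhard-style global operator purely as a stand-in for whichever tone mapping method is handed to the framework; the `key` values and the fixed choice of the second image's parameter are assumptions for illustration only (the thesis synthesizes the counterpart by optimization under the BVCP, which is not reproduced here):

```python
import numpy as np

def tone_map(hdr, key=0.18):
    """Global Reinhard-style operator: scale by the log-average luminance,
    then compress into [0, 1) with x / (1 + x)."""
    eps = 1e-6
    l_avg = np.exp(np.mean(np.log(hdr + eps)))  # log-average luminance
    scaled = key * hdr / l_avg
    return scaled / (1.0 + scaled)

# Synthetic HDR data with a wide dynamic range (illustrative only).
hdr = np.random.default_rng(0).lognormal(0.0, 2.0, size=(4, 4))

left = tone_map(hdr, key=0.18)   # default parameter (the "left" image)
right = tone_map(hdr, key=0.36)  # counterpart with a different exposure key
```

The two outputs cover the source range differently, which is the property the binocular framework exploits; the real system chooses the second image's parameters optimally rather than by a fixed factor.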
/ Abstract --- p.i / Acknowledgement --- p.ix / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background Study --- p.5 / Chapter 2.1 --- Stereo Display --- p.5 / Chapter 2.2 --- HDR Tone Mapping --- p.9 / Chapter 2.2.1 --- HDR Image --- p.9 / Chapter 2.2.2 --- Tone Mapping --- p.11 / Chapter 3 --- Binocular Vision --- p.16 / Chapter 3.1 --- Binocular Single Vision --- p.16 / Chapter 3.1.1 --- Binocular Single Vision --- p.16 / Chapter 3.1.2 --- Motor Fusion and Sensory Fusion --- p.19 / Chapter 3.1.3 --- Fusion, Suppression and Rivalry --- p.21 / Chapter 3.1.4 --- Rivalry --- p.23 / Chapter 3.1.5 --- Fusional Theory --- p.24 / Chapter 3.1.6 --- Fusion with Stereopsis --- p.27 / Chapter 3.2 --- Binocular discomfort --- p.29 / Chapter 3.2.1 --- Fusional area --- p.31 / Chapter 3.2.2 --- Contour difference --- p.32 / Chapter 3.2.3 --- Failure of rivalry --- p.33 / Chapter 3.2.4 --- Contour and regional contrast --- p.34 / Chapter 4 --- Binocular Visual Comfort Predictor (BVCP) --- p.37 / Chapter 4.1 --- Introduction --- p.37 / Chapter 4.2 --- Design of BVCP --- p.40 / Chapter 4.2.1 --- Fusional Area --- p.40 / Chapter 4.2.2 --- Contour Fusion --- p.42 / Chapter 4.2.3 --- Failure of Rivalry --- p.48 / Chapter 4.2.4 --- Contour and Regional Contrast --- p.53 / Chapter 4.2.5 --- The Overall Fusion Predictor --- p.54 / Chapter 4.3 --- Experiments and User Study --- p.56 / Chapter 4.4 --- Discussion --- p.60 / Chapter 5 --- Binocular Tone Mapping --- p.62 / Chapter 5.1 --- Introduction --- p.62 / Chapter 5.2 --- Binocular Tone Mapping Framework --- p.66 / Chapter 5.2.1 --- System Overview --- p.66 / Chapter 5.2.2 --- Optimization --- p.68 / Chapter 5.3 --- Experiments and Results --- p.71 / Chapter 5.4 --- User Study --- p.77 / Chapter 5.4.1 --- Visual Richness --- p.77 / Chapter 5.4.2 --- Binocular Symmetry --- p.81 / Chapter 5.5 --- Discussion --- p.82 / Chapter 5.5.1 --- Incorporating Stereopsis --- p.82 / Chapter 5.5.2 --- Limitation --- p.84 / Chapter 5.5.3 --- Extension --- p.85 / Chapter 6 --- Conclusion --- p.91 / Chapter 6.1 --- Contribution --- p.91 / Chapter 6.2 --- Future Work --- p.92 / Chapter A --- More Results of Binocular Tone Mapping --- p.94 / Chapter B --- Test Sequence for BVCP --- p.103 / Bibliography --- p.108
|
124 |
An Inertial-Optical Tracking System for Quantitative, Freehand, 3D Ultrasound / Goldsmith, Abraham Myron 16 January 2009 (has links)
Three-dimensional (3D) ultrasound has become an increasingly popular medical imaging tool over the last decade. It offers significant advantages over two-dimensional (2D) ultrasound, such as improved accuracy, the ability to display image planes that are physically impossible with 2D ultrasound, and reduced dependence on the skill of the sonographer. Among 3D medical imaging techniques, ultrasound is the only one portable enough to be used by first responders, on the battlefield, and in rural areas. There are three basic methods of acquiring 3D ultrasound images. In the first method, a 2D array transducer is used to capture a 3D volume directly, using electronic beam steering. This method is mainly used for echocardiography. In the second method, a linear array transducer is mechanically actuated, giving a slower and less expensive alternative to the 2D array. The third method uses a linear array transducer that is moved by hand. This method is known as freehand 3D ultrasound. Whether using a 2D array or a mechanically actuated linear array transducer, the position and orientation of each image are known ahead of time. This is not the case for freehand scanning. To reconstruct a 3D volume from a series of 2D ultrasound images, assumptions must be made about the position and orientation of each image, or a mechanism for detecting the position and orientation of each image must be employed. The most widely used method for freehand 3D imaging relies on the assumption that the probe moves along a straight path with constant orientation and speed. This method requires considerable skill on the part of the sonographer. Another technique uses features within the images themselves to estimate each image's relative location. However, such techniques are not well accepted for diagnostic use because they are not always reliable. The final method for acquiring position and orientation information is to use a six-degree-of-freedom (6 DoF) tracking system.
Commercially available 6 DoF tracking systems use magnetic fields, ultrasonic ranging, or optical tracking to measure the position and orientation of a target. Although accurate, all of these systems have fundamental limitations in that they are relatively expensive and they all require sensors or transmitters to be placed in fixed locations to provide a fixed frame of reference. The goal of the work presented here is to create a probe tracking system for freehand 3D ultrasound that does not rely on any fixed frame of reference. This system tracks the ultrasound probe using only sensors integrated into the probe itself. The advantages of such a system are that it requires no setup before it can be used, it is more portable because no extra equipment is required, it is immune from environmental interference, and it is less expensive than external tracking systems. An ideal tracking system for freehand 3D ultrasound would track in all 6 DoF. However, current sensor technology limits this system to five. Linear transducer motion along the skin surface is tracked optically and transducer orientation is tracked using MEMS gyroscopes. An optical tracking system was developed around an optical mouse sensor to provide linear position information by tracking the skin surface. Two versions were evaluated. One included an optical fiber bundle and the other did not. The purpose of the optical fiber is to allow the system to integrate more easily into existing probes by allowing the sensor and electronics to be mounted away from the scanning end of the probe. Each version was optimized to track features on the skin surface while providing adequate Depth Of Field (DOF) to accept variation in the height of the skin surface. Orientation information is acquired using a 3 axis MEMS gyroscope. The sensor was thoroughly characterized to quantify performance in terms of accuracy and drift. 
This data provided a basis for estimating the achievable 3D reconstruction accuracy of the complete system. Electrical and mechanical components were designed to attach the sensor to the ultrasound probe in such a way as to simulate its being embedded in the probe itself. An embedded system was developed to perform the processing necessary to translate the sensor data into probe position and orientation estimates in real time. The system utilizes a Microblaze soft core microprocessor and a set of peripheral devices implemented in a Xilinx Spartan 3E field programmable gate array. The Xilinx Microkernel real time operating system performs essential system management tasks and provides a stable software platform for implementation of the inertial tracking algorithm. Stradwin 3D ultrasound software was used to provide a user interface and perform the actual 3D volume reconstruction. Stradwin retrieves 2D ultrasound images from the Terason t3000 portable ultrasound system and communicates with the tracking system to gather position and orientation data. The 3D reconstruction is generated and displayed on the screen of the PC in real time. Stradwin also provides essential system features such as storage and retrieval of data, 3D data interaction, reslicing, manual 3D segmentation, and volume calculation for segmented regions. The 3D reconstruction performance of the system was evaluated by freehand scanning a cylindrical inclusion in a CIRS model 044 ultrasound phantom. Five different motion profiles were used and each profile was repeated 10 times. This entire test regimen was performed twice, once with the optical tracking system using the optical fiber bundle, and once with the optical tracking system without the optical fiber bundle. 3D reconstructions were performed with and without the position and orientation data to provide a basis for comparison. Volume error and surface error were used as the performance metrics. 
Volume error ranged from 1.3% to 5.3% with tracking information versus 15.6% to 21.9% without for the version of the system without the optical fiber bundle. Volume error ranged from 3.7% to 7.6% with tracking information versus 8.7% to 13.7% without for the version of the system with the optical fiber bundle. Surface error ranged from 0.319 mm RMS to 0.462 mm RMS with tracking information versus 0.678 mm RMS to 1.261 mm RMS without for the version of the system without the optical fiber bundle. Surface error ranged from 0.326 mm RMS to 0.774 mm RMS with tracking information versus 0.538 mm RMS to 1.657 mm RMS without for the version of the system with the optical fiber bundle. The prototype tracking system successfully demonstrated that accurate 3D ultrasound volumes can be generated from 2D freehand data using only sensors integrated into the ultrasound probe. One serious shortcoming of this system is that it only tracks 5 of the 6 degrees of freedom required to perform complete 3D reconstructions. The optical system provides information about linear movement but because it tracks a surface, it cannot measure vertical displacement. Overcoming this limitation is the most obvious candidate for future research using this system. The overall tracking platform, meaning the embedded tracking computer and the PC software, developed and integrated in this work, is ready to take advantage of vertical displacement data, should a method be developed for sensing it.
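The orientation-tracking idea above, accumulating MEMS gyroscope rate readings into a probe orientation, can be illustrated with a generic first-order dead-reckoning sketch. This is not the embedded MicroBlaze implementation described in the abstract, just the standard integration scheme such a system is built on; the sample rate and test rotation below are assumed:

```python
import numpy as np

def integrate_gyro(rates, dt):
    """Accumulate body-frame angular rates into an orientation (rotation matrix).

    rates: iterable of (wx, wy, wz) samples in rad/s; dt: sample period in s.
    Uses a first-order update R <- R (I + [w]x dt) followed by SVD
    re-orthonormalisation to limit numerical drift.
    """
    R = np.eye(3)
    for wx, wy, wz in rates:
        omega = np.array([[0.0, -wz,  wy],
                          [ wz, 0.0, -wx],
                          [-wy,  wx, 0.0]])  # skew-symmetric rate matrix
        R = R @ (np.eye(3) + omega * dt)
        u, _, vt = np.linalg.svd(R)           # snap back onto SO(3)
        R = u @ vt
    return R

# Assumed example: rotate at pi/2 rad/s about z for 1 s at 1 kHz -> ~90 degrees.
R = integrate_gyro([(0.0, 0.0, np.pi / 2)] * 1000, dt=0.001)
```

In a real probe tracker the raw rates would first be corrected for the bias and scale factors obtained during the sensor characterization the thesis describes.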
|
125 |
3D Object Understanding from RGB-D Data / Feng, Jie January 2017 (has links)
Understanding 3D objects and being able to interact with them in the physical world are essential for building intelligent computer vision systems.
It has tremendous potential for various applications, ranging from augmented reality and 3D printing to robotics.
While it may seem simple for humans to look at and make sense of the visual world, it is a complicated process for machines to accomplish similar tasks.
Generally, such a system involves a series of processes: identifying and segmenting a target object, estimating its 3D shape, and predicting its pose in an open scene where the target objects may not have been seen before.
Although considerable research has been devoted to these problems, they remain very challenging due to a few key issues:
1) most methods rely solely on color images for interpreting the 3D properties of an object; 2) large labeled color image datasets are expensive to obtain for tasks like pose estimation, limiting the ability to train powerful prediction models; 3) training data for the target object is typically required for 3D shape estimation and pose prediction, making these methods hard to scale and generalize to unseen objects.
Recently, several technological changes have created interesting opportunities for solving these fundamental vision problems.
First, low-cost depth sensors have become widely available, providing an additional sensory input, the depth map, which is very useful for extracting 3D information about the object and scene. Second, with the ease of 3D object scanning with depth sensors and open access to large-scale 3D model databases like 3D Warehouse and ShapeNet, it is possible to leverage such data to build powerful learning models.
Third, machine learning algorithms such as deep learning have become powerful enough to surpass the previous state of the art, and even human performance, on challenging tasks like object recognition. It is now feasible to learn rich information from large datasets in a single model.
The objective of this thesis is to leverage these emerging tools and data to solve the above-mentioned problems in 3D object understanding from a new perspective, by designing machine learning algorithms that utilize RGB-D data.
Instead of depending solely on color images, we combine color and depth images to achieve significantly higher object segmentation performance. We use a large collection of 3D object models to provide high-quality training data, and retrieve visually similar 3D CAD models from low-quality captured depth images, which enables knowledge transfer from database objects to the target object in an observed scene.
By using content-based 3D shape retrieval, we also significantly improve pose estimation via similar proxy models, without the need to create an exact 3D model as a reference.
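A basic operation underlying RGB-D pipelines like this one is back-projecting a depth map into a 3D point cloud using the camera intrinsics, so that depth pixels become geometry the later stages (segmentation, retrieval, pose) can consume. The following is a generic pinhole-camera sketch, not code from the thesis; `fx`, `fy`, `cx`, `cy` are assumed intrinsics:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into an Nx3 point cloud.

    Standard pinhole model: X = (u - cx) Z / fx, Y = (v - cy) Z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

The pixel at the principal point maps to a point on the optical axis, which gives a quick sanity check for the intrinsics.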
|
126 |
3D object reconstruction from line drawings. / January 2005 (has links)
Cao Liangliang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 64-69). / Abstracts in English and Chinese. / Chapter 1 --- Introduction and Related Work --- p.1 / Chapter 1.1 --- Reconstruction from Single Line Drawings and the Applications --- p.1 / Chapter 1.2 --- Optimization-based Reconstruction --- p.2 / Chapter 1.3 --- Other Reconstruction Methods --- p.2 / Chapter 1.3.1 --- Line Labeling and Algebraic Methods --- p.2 / Chapter 1.3.2 --- CAD Reconstruction --- p.3 / Chapter 1.3.3 --- Modelling from Images --- p.3 / Chapter 1.4 --- Finding Faces of Line Drawings --- p.4 / Chapter 1.5 --- Generalized Cylinder --- p.4 / Chapter 1.6 --- Research Problems and Our Contribution --- p.5 / Chapter 1.6.1 --- A New Criteria --- p.5 / Chapter 1.6.2 --- Recover Objects from Line Drawings without Hidden Lines --- p.6 / Chapter 1.6.3 --- Reconstruction of Curved Objects --- p.6 / Chapter 1.6.4 --- Planar Limbs Assumption and the Derived Models --- p.6 / Chapter 2 --- A New Criteria for Reconstruction --- p.8 / Chapter 2.1 --- Introduction --- p.8 / Chapter 2.2 --- Human Visual Perception and the Symmetry Measure --- p.10 / Chapter 2.3 --- Reconstruction Based on Symmetry and Planarity --- p.11 / Chapter 2.3.1 --- Finding Faces --- p.11 / Chapter 2.3.2 --- Constraint of Planarity --- p.11 / Chapter 2.3.3 --- Objective Function --- p.12 / Chapter 2.3.4 --- Reconstruction Algorithm --- p.13 / Chapter 2.4 --- Experimental Results --- p.13 / Chapter 2.5 --- Summary --- p.18 / Chapter 3 --- Line Drawings without Hidden Lines: Inference and Reconstruction --- p.19 / Chapter 3.1 --- Introduction --- p.19 / Chapter 3.2 --- Terminology --- p.20 / Chapter 3.3 --- Theoretical Inference of the Hidden Topological Structure --- p.21 / Chapter 3.3.1 --- Assumptions --- p.21 / Chapter 3.3.2 --- Finding the Degrees and Ranks --- p.22 / Chapter 3.3.3 --- Constraints for the Inference --- p.23 / Chapter 3.4 --- An Algorithm to Recover 
the Hidden Topological Structure --- p.25 / Chapter 3.4.1 --- Outline of the Algorithm --- p.26 / Chapter 3.4.2 --- Constructing the Initial Hidden Structure --- p.26 / Chapter 3.4.3 --- Reducing Initial Hidden Structure --- p.27 / Chapter 3.4.4 --- Selecting the Most Plausible Structure --- p.28 / Chapter 3.5 --- Reconstruction of 3D Objects --- p.29 / Chapter 3.6 --- Experimental Results --- p.32 / Chapter 3.7 --- Summary --- p.32 / Chapter 4 --- Curved Objects Reconstruction from 2D Line Drawings --- p.35 / Chapter 4.1 --- Introduction --- p.35 / Chapter 4.2 --- Related Work --- p.36 / Chapter 4.2.1 --- Face Identification --- p.36 / Chapter 4.2.2 --- 3D Reconstruction of planar objects --- p.37 / Chapter 4.3 --- Reconstruction of Curved Objects --- p.37 / Chapter 4.3.1 --- Transformation of Line Drawings --- p.37 / Chapter 4.3.2 --- Finding 3D Bezier Curves --- p.39 / Chapter 4.3.3 --- Bezier Surface Patches and Boundaries --- p.40 / Chapter 4.3.4 --- Generating Bezier Surface Patches --- p.41 / Chapter 4.4 --- Results --- p.43 / Chapter 4.5 --- Summary --- p.45 / Chapter 5 --- Planar Limbs and Degen Generalized Cylinders --- p.47 / Chapter 5.1 --- Introduction --- p.47 / Chapter 5.2 --- Planar Limbs and View Directions --- p.49 / Chapter 5.3 --- DGCs in Homogeneous Coordinates --- p.53 / Chapter 5.3.1 --- Homogeneous Coordinates --- p.53 / Chapter 5.3.2 --- Degen Surfaces --- p.54 / Chapter 5.3.3 --- DGCs --- p.54 / Chapter 5.4 --- Properties of DGCs --- p.56 / Chapter 5.5 --- Potential Applications --- p.59 / Chapter 5.5.1 --- Recovery of DGC Descriptions --- p.59 / Chapter 5.5.2 --- Deformable DGCs --- p.60 / Chapter 5.6 --- Summary --- p.61 / Chapter 6 --- Conclusion and Future Work --- p.62 / Bibliography --- p.64
|
127 |
Human computer interaction: a vision-based approach for American sign language recognition. / CUHK electronic theses & dissertations collection / January 2002 (has links)
Deng Jiangwen. / "April 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 156-170). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
|
128 |
Robust and parallel mesh reconstruction from unoriented noisy points. / January 2009 (has links)
Sheung, Hoi. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (p. 65-70). / Abstract also in Chinese. / Abstract --- p.v / Acknowledgements --- p.ix / List of Figures --- p.xiii / List of Tables --- p.xv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Main Contributions --- p.3 / Chapter 1.2 --- Outline --- p.3 / Chapter 2 --- Related Work --- p.5 / Chapter 2.1 --- Volumetric reconstruction --- p.5 / Chapter 2.2 --- Combinatorial approaches --- p.6 / Chapter 2.3 --- Robust statistics in surface reconstruction --- p.6 / Chapter 2.4 --- Down-sampling of massive points --- p.7 / Chapter 2.5 --- Streaming and parallel computing --- p.7 / Chapter 3 --- Robust Normal Estimation and Point Projection --- p.9 / Chapter 3.1 --- Robust Estimator --- p.9 / Chapter 3.2 --- Mean Shift Method --- p.11 / Chapter 3.3 --- Normal Estimation and Projection --- p.11 / Chapter 3.4 --- Moving Least Squares Surfaces --- p.14 / Chapter 3.4.1 --- Step 1: local reference domain --- p.14 / Chapter 3.4.2 --- Step 2: local bivariate polynomial --- p.14 / Chapter 3.4.3 --- Simpler Implementation --- p.15 / Chapter 3.5 --- Robust Moving Least Squares by Forward Search --- p.16 / Chapter 3.6 --- Comparison with RMLS --- p.17 / Chapter 3.7 --- K-Nearest Neighborhoods --- p.18 / Chapter 3.7.1 --- Octree --- p.18 / Chapter 3.7.2 --- Kd-Tree --- p.19 / Chapter 3.7.3 --- Other Techniques --- p.19 / Chapter 3.8 --- Principal Component Analysis --- p.19 / Chapter 3.9 --- Polynomial Fitting --- p.21 / Chapter 3.10 --- Highly Parallel Implementation --- p.22 / Chapter 4 --- Error Controlled Subsampling --- p.23 / Chapter 4.1 --- Centroidal Voronoi Diagram --- p.23 / Chapter 4.2 --- Energy Function --- p.24 / Chapter 4.2.1 --- Distance Energy --- p.24 / Chapter 4.2.2 --- Shape Prior Energy --- p.24 / Chapter 4.2.3 --- Global Energy --- p.25 / Chapter 4.3 --- Lloyd's Algorithm --- p.26 / Chapter 4.4 --- Clustering Optimization and Subsampling --- p.27 / Chapter 5 --- Mesh Generation --- p.29 / Chapter 5.1 --- Tight Cocone Triangulation --- p.29 / Chapter 5.2 --- Clustering Based Local Triangulation --- p.30 / Chapter 5.2.1 --- Initial Surface Reconstruction --- p.30 / Chapter 5.2.2 --- Cleaning Process --- p.32 / Chapter 5.2.3 --- Comparisons --- p.33 / Chapter 5.3 --- Computing Dual Graph --- p.34 / Chapter 6 --- Results and Discussion --- p.37 / Chapter 6.1 --- Results of Mesh Reconstruction from Noisy Point Cloud --- p.37 / Chapter 6.2 --- Results of Clustering Based Local Triangulation --- p.47 / Chapter 7 --- Conclusions --- p.55 / Chapter 7.1 --- Key Contributions --- p.55 / Chapter 7.2 --- Factors Affecting Our Algorithm --- p.55 / Chapter 7.3 --- Future Work --- p.56 / Chapter A --- Building Neighborhood Table --- p.59 / Chapter A.1 --- Building Neighborhood Table in Streaming --- p.59 / Chapter B --- Publications --- p.63 / Bibliography --- p.65
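The PCA step listed in Chapter 3.8 is the classic way to estimate a point normal from its k nearest neighbors: the normal is the eigenvector of the neighborhood covariance with the smallest eigenvalue. The following is a generic sketch of that standard technique, not the thesis implementation (which combines it with robust statistics and forward search to handle noise and outliers):

```python
import numpy as np

def estimate_normal(neighbors):
    """PCA normal estimate for a local point neighborhood.

    neighbors: k x 3 array of the point's nearest neighbors. The normal is
    the direction of least variance, i.e. the eigenvector of the covariance
    matrix with the smallest eigenvalue. Sign is ambiguous (unoriented).
    """
    pts = np.asarray(neighbors, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # eigenvector of smallest eigenvalue
```

The sign ambiguity is why the input points are called "unoriented" in the title: a separate consistency step is needed to flip normals coherently across the cloud.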
|
129 |
Recovering 3D geometry from single line drawings. / January 2011 (has links)
Xue, Tianfan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 52-55). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Previous Approaches on Face Identification --- p.3 / Chapter 1.1.1 --- Face Identification --- p.3 / Chapter 1.1.2 --- General Objects --- p.4 / Chapter 1.1.3 --- Manifold Objects --- p.7 / Chapter 1.2 --- Previous Approaches on 3D Reconstruction --- p.9 / Chapter 1.3 --- Our approach for Face Identification --- p.11 / Chapter 1.4 --- Our approach for 3D Reconstruction --- p.13 / Chapter 2 --- Face Detection --- p.14 / Chapter 2.1 --- GAFI and its Face Identification Results --- p.15 / Chapter 2.2 --- Our Face Identification Approach --- p.17 / Chapter 2.2.1 --- Real Face Detection --- p.18 / Chapter 2.2.2 --- The Weak Face Adjacency Theorem --- p.20 / Chapter 2.2.3 --- Searching for Type 1 Lost Faces --- p.22 / Chapter 2.2.4 --- Searching for Type 2 Lost Faces --- p.23 / Chapter 2.3 --- Experimental Results --- p.25 / Chapter 3 --- 3D Reconstruction --- p.30 / Chapter 3.1 --- Assumption and Terminology --- p.30 / Chapter 3.2 --- Finding Cuts from a Line Drawing --- p.34 / Chapter 3.2.1 --- Propositions for Finding Cuts --- p.34 / Chapter 3.2.2 --- Searching for Good Cuts --- p.35 / Chapter 3.3 --- Separation of a Line Drawing from Cuts --- p.38 / Chapter 3.4 --- 3D Reconstruction from a Line Drawing --- p.45 / Chapter 3.5 --- Experiments --- p.45 / Chapter 4 --- Conclusion --- p.50
|
130 |
Dynamic texture synthesis in image and video processing. / January 2008 (has links)
Xu, Leilei. / Thesis submitted in: October 2007. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 78-84). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Texture and Dynamic Textures --- p.1 / Chapter 1.2 --- Related work --- p.4 / Chapter 1.3 --- Thesis Outline --- p.7 / Chapter 2 --- Image/Video Processing --- p.8 / Chapter 2.1 --- Bayesian Analysis --- p.8 / Chapter 2.2 --- Markov Property --- p.10 / Chapter 2.3 --- Graph Cut --- p.12 / Chapter 2.4 --- Belief Propagation --- p.13 / Chapter 2.5 --- Expectation-Maximization --- p.15 / Chapter 2.6 --- Principal Component Analysis --- p.15 / Chapter 3 --- Linear Dynamic System --- p.17 / Chapter 3.1 --- System Model --- p.18 / Chapter 3.2 --- Degeneracy and Canonical Model Realization --- p.19 / Chapter 3.3 --- Learning of Dynamic Textures --- p.19 / Chapter 3.4 --- Synthesizing Dynamic Textures --- p.21 / Chapter 3.5 --- Summary --- p.21 / Chapter 4 --- Dynamic Color Texture Synthesis --- p.25 / Chapter 4.1 --- Related Work --- p.25 / Chapter 4.2 --- System Model --- p.26 / Chapter 4.2.1 --- Laplacian Pyramid-based DCTS Model --- p.28 / Chapter 4.2.2 --- RBF-based DCTS Model --- p.28 / Chapter 4.3 --- Experimental Results --- p.32 / Chapter 4.4 --- Summary --- p.42 / Chapter 5 --- Dynamic Textures using Multi-resolution Analysis --- p.43 / Chapter 5.1 --- System Model --- p.44 / Chapter 5.2 --- Multi-resolution Descriptors --- p.46 / Chapter 5.2.1 --- Laplacian Pyramids --- p.47 / Chapter 5.2.2 --- Haar Wavelets --- p.48 / Chapter 5.2.3 --- Steerable Pyramid --- p.49 / Chapter 5.3 --- Experimental Results --- p.51 / Chapter 5.4 --- Summary --- p.55 / Chapter 6 --- Motion Transfer --- p.59 / Chapter 6.1 --- Problem formulation --- p.60 / Chapter 6.1.1 --- Similarity on Appearance --- p.61 / Chapter 6.1.2 --- Similarity on Dynamic Behavior --- p.62 / Chapter 6.1.3 --- The Objective Function --- p.65 / Chapter 6.2 --- Further Work --- p.66 / Chapter 7 --- Conclusions --- p.67 / Chapter A --- List of Publications --- p.68 / Chapter B --- Degeneracy in LDS Model --- p.70 / Chapter B.1 --- Equivalence Class --- p.70 / Chapter B.2 --- The Choice of the Matrix Q --- p.70 / Chapter B.3 --- Swapping the Column of C and A --- p.71 / Chapter C --- Probability Density Functions --- p.74 / Chapter C.1 --- Probability Distribution --- p.74 / Chapter C.2 --- Joint Probability Distributions --- p.75 / Bibliography --- p.78
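The linear dynamic system (LDS) model of Chapter 3 synthesizes dynamic textures by rolling out learned state dynamics: a hidden state evolves as x_{t+1} = A x_t + v_t and each frame is observed as y_t = C x_t. The following is a hedged sketch of the synthesis step only (a generic LDS roll-out, not the thesis code); in practice `A`, `C`, and the initial state would come from the learning stage, whereas here they are assumed toy values:

```python
import numpy as np

def synthesize_lds(A, C, x0, steps, noise_std=0.0, seed=0):
    """Roll out an LDS: x_{t+1} = A x_t + v_t, y_t = C x_t.

    Returns a (steps, output_dim) array of synthesized observations (frames).
    noise_std controls the driving noise v_t; zero gives a deterministic roll-out.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    frames = []
    for _ in range(steps):
        frames.append(C @ x)                                  # observe frame
        x = A @ x + noise_std * rng.standard_normal(x.shape)  # advance state
    return np.array(frames)
```

With a stable `A` (spectral radius below 1) and nonzero driving noise, the roll-out produces an endless, statistically stationary sequence, which is exactly what makes the model suitable for texture synthesis.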
|