211

Interactive volume visualization in a virtual environment.

January 1998 (has links)
by Yu-Hang Siu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 74-80). / Abstract also in Chinese. / Abstract --- p.iii / Acknowledgements --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Volume Visualization --- p.2 / Chapter 1.2 --- Virtual Environment --- p.11 / Chapter 1.3 --- Approach --- p.12 / Chapter 1.4 --- Thesis Overview --- p.13 / Chapter 2 --- Contour Extraction --- p.15 / Chapter 2.1 --- Concept of Intelligent Scissors --- p.16 / Chapter 2.2 --- Dijkstra's Algorithm --- p.18 / Chapter 2.3 --- Cost Function --- p.20 / Chapter 2.4 --- Summary --- p.23 / Chapter 3 --- Volume Cutting --- p.24 / Chapter 3.1 --- Basic idea of the algorithm --- p.25 / Chapter 3.2 --- Intelligent Scissors on Surface Mesh --- p.27 / Chapter 3.3 --- Internal Cutting Surface --- p.29 / Chapter 3.4 --- Summary --- p.34 / Chapter 4 --- Three-dimensional Intelligent Scissors --- p.35 / Chapter 4.1 --- 3D Graph Construction --- p.36 / Chapter 4.2 --- Cost Function --- p.40 / Chapter 4.3 --- Applications --- p.42 / Chapter 4.3.1 --- Surface Extraction --- p.42 / Chapter 4.3.2 --- Vessel Tracking --- p.47 / Chapter 4.4 --- Summary --- p.49 / Chapter 5 --- Implementations in a Virtual Environment --- p.52 / Chapter 5.1 --- Volume Cutting --- p.53 / Chapter 5.2 --- Surface Extraction --- p.56 / Chapter 5.3 --- Vessel Tracking --- p.59 / Chapter 5.4 --- Summary --- p.64 / Chapter 6 --- Conclusions --- p.68 / Chapter 6.1 --- Summary of Results --- p.68 / Chapter 6.2 --- Future Directions --- p.70 / Chapter A --- Performance of Dijkstra's Shortest Path Algorithm --- p.72 / Chapter B --- IsoRegion Construction --- p.73
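The contour-extraction and volume-cutting chapters listed above build on intelligent scissors, which reduce boundary tracing to Dijkstra's shortest-path search over a pixel graph whose costs are low along image edges. A minimal sketch of that core search, on an invented 4-connected grid of node costs (not the thesis implementation):

```python
import heapq

def dijkstra_path(cost, start, goal):
    """Minimum-cost 4-connected path over a 2D grid of node costs."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]  # pay the cost of the node we enter
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk predecessors back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# low-cost "edge" along the top row of a toy cost image
grid = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [9, 9, 9, 1]]
print(dijkstra_path(grid, (0, 0), (2, 3)))
```

Here the low-cost top row plays the role of an image edge: the returned path hugs it rather than cutting across the high-cost interior.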
212

Fast interactive 2D and 3D segmentation tools.

January 1998 (has links)
by Kevin Chun-Ho Wong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 74-79). / Abstract also in Chinese. / Chinese Abstract --- p.v / Abstract --- p.vi / Acknowledgements --- p.vii / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Prior Work : Image Segmentation Techniques --- p.3 / Chapter 2.1 --- Introduction to Image Segmentation --- p.4 / Chapter 2.2 --- Region Based Segmentation --- p.5 / Chapter 2.2.1 --- Boundary Based vs Region Based --- p.5 / Chapter 2.2.2 --- Region growing --- p.5 / Chapter 2.2.3 --- Integrating Region Based and Edge Detection --- p.6 / Chapter 2.2.4 --- Watershed Based Methods --- p.8 / Chapter 2.3 --- Fuzzy Set Theory in Segmentation --- p.8 / Chapter 2.3.1 --- Fuzzy Geometry Concept --- p.8 / Chapter 2.3.2 --- Fuzzy C-Means (FCM) Clustering --- p.9 / Chapter 2.4 --- Canny edge filter with contour following --- p.11 / Chapter 2.5 --- Pyramid based Fast Curve Extraction --- p.12 / Chapter 2.6 --- Curve Extraction with Multi-Resolution Fourier transformation --- p.13 / Chapter 2.7 --- User interfaces for Image Segmentation --- p.13 / Chapter 2.7.1 --- Intelligent Scissors --- p.14 / Chapter 2.7.2 --- Magic Wands --- p.16 / Chapter 3 --- Prior Work : Active Contours Model (Snakes) --- p.17 / Chapter 3.1 --- Introduction to Active Contour Model --- p.18 / Chapter 3.2 --- Variants and Extensions of Snakes --- p.19 / Chapter 3.2.1 --- Balloons --- p.20 / Chapter 3.2.2 --- Robust Dual Active Contour --- p.21 / Chapter 3.2.3 --- Gradient Vector Flow Snakes --- p.22 / Chapter 3.2.4 --- Energy Minimization using Dynamic Programming with presence of hard constraints --- p.23 / Chapter 3.3 --- Conclusions --- p.25 / Chapter 4 --- Slimmed Graph --- p.26 / Chapter 4.1 --- BSP-based image analysis --- p.27 / Chapter 4.2 --- Split Line Selection --- p.29 / Chapter 4.3 --- Split Line Selection with Summed Area Table --- p.29 / Chapter 4.4 --- Neighbor blocks --- p.31 / Chapter 4.5 --- Slimmed Graph Generation --- p.32 / Chapter 4.6 --- Time Complexity --- p.35 / Chapter 4.7 --- Results and Conclusions --- p.36 / Chapter 5 --- Fast Intelligent Scissor --- p.38 / Chapter 5.1 --- Background --- p.39 / Chapter 5.2 --- Motivation of Fast Intelligent Scissors --- p.39 / Chapter 5.3 --- Main idea of Fast Intelligent Scissors --- p.40 / Chapter 5.3.1 --- Node position and Cost function --- p.41 / Chapter 5.4 --- Implementation and Results --- p.42 / Chapter 5.5 --- Conclusions --- p.43 / Chapter 6 --- 3D Contour Detection: Volume Cutting --- p.50 / Chapter 6.1 --- Interactive Volume Cutting with the intelligent scissors --- p.51 / Chapter 6.2 --- Contour Selection --- p.52 / Chapter 6.2.1 --- 3D Intelligent Scissors --- p.53 / Chapter 6.2.2 --- Dijkstra's algorithm --- p.54 / Chapter 6.3 --- 3D Volume Cutting --- p.54 / Chapter 6.3.1 --- Cost function for the cutting surface --- p.55 / Chapter 6.3.2 --- Continuity function (x, y, z) --- p.59 / Chapter 6.3.3 --- Finding the cutting surface --- p.61 / Chapter 6.3.4 --- Topological problems for the volume cutting --- p.61 / Chapter 6.3.5 --- Assumptions for the well-conditional contour used in our algorithm --- p.62 / Chapter 6.4 --- Implementation and Results --- p.64 / Chapter 6.5 --- Conclusions --- p.64 / Chapter 7 --- Conclusions --- p.71 / Chapter 7.1 --- Contributions --- p.71 / Chapter 7.2 --- Future Work --- p.72 / Chapter 7.2.1 --- Real-time interactive tools with Slimmed Graph --- p.72 / Chapter 7.2.2 --- 3D slimmed graph --- p.72 / Chapter 7.2.3 --- Cartoon Film Generation System --- p.72
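The summed-area table named in Chapter 4.3 is a standard structure that makes the sum over any rectangular block available in four lookups, which is what makes fast split-line scoring possible. A generic sketch of the technique (nothing here is taken from the thesis code):

```python
def summed_area_table(img):
    """sat[r][c] = sum of img over the rectangle [0..r) x [0..c)."""
    rows, cols = len(img), len(img[0])
    sat = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            sat[r + 1][c + 1] = (img[r][c] + sat[r][c + 1]
                                 + sat[r + 1][c] - sat[r][c])
    return sat

def block_sum(sat, r0, c0, r1, c1):
    """Sum of img[r0:r1][c0:c1] in O(1): four table lookups."""
    return sat[r1][c1] - sat[r0][c1] - sat[r1][c0] + sat[r0][c0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
sat = summed_area_table(img)
print(block_sum(sat, 1, 1, 3, 3))  # 5 + 6 + 8 + 9 = 28
```

Once the table is built in a single O(rows x cols) pass, every candidate split line can be scored in constant time regardless of block size.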
213

Binocular tone mapping. / 雙目色調映射 / CUHK electronic theses & dissertations collection / Shuang mu se diao ying she

January 2012 (has links)
With the booming of 3D movies and video games, binocular (stereo) display devices have become increasingly popular and affordable. By introducing one additional image space, stereo displays double the image domains for visualization, one for the left eye and the other for the right eye. Existing binocular display systems only utilize this dual image domain for stereopsis. / Our human binocular vision is not only able to fuse two images with disparity, but also two images with differences in luminance, contrast and even detail, into a single percept, up to a certain limit. This phenomenon is known as binocular single vision. By a complicated neurophysiologic fusion process, humans can perceive more visual content via binocular single vision than through one arbitrary single view or the linear blending of two views. / In this thesis, for the first time, binocular single vision is applied to computer graphics. Based on this phenomenon, a novel binocular tone mapping framework is proposed.
From the source high-dynamic range (HDR) image, the proposed framework generates a binocular low-dynamic range (LDR) image pair that preserves more human-perceivable visual content than a single LDR image, using the additional image domain. Given a tone mapping method, our framework first generates one tone-mapped LDR image (left, without loss of generality) with the default or user-selected parameters. Then its counterpart image (right) of the LDR pair is optimally synthesized from the same source HDR image. The two LDR images are not identical, and contain different visual information. Via binocular displays, they can aggregately present more human-perceivable visual richness than a single arbitrary LDR image. / Human binocular vision has a tolerance on the difference between two views. When this limit is exceeded, binocular viewing discomfort appears. To prevent such visual discomfort, a novel binocular viewing comfort predictor (BVCP) is also proposed to predict the comfort threshold of binocular vision. In our framework, BVCP is used to guide the generation of the LDR image pair without triggering visual discomfort. Through several user studies, the effectiveness of the proposed framework in increasing human-perceivable visual richness and the predictability of the proposed BVCP in predicting the binocular discomfort threshold have been demonstrated and validated. / Yang, Xuan. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 108-115). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
/ Abstract --- p.i / Acknowledgement --- p.ix / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background Study --- p.5 / Chapter 2.1 --- Stereo Display --- p.5 / Chapter 2.2 --- HDR Tone Mapping --- p.9 / Chapter 2.2.1 --- HDR Image --- p.9 / Chapter 2.2.2 --- Tone Mapping --- p.11 / Chapter 3 --- Binocular Vision --- p.16 / Chapter 3.1 --- Binocular Single Vision --- p.16 / Chapter 3.1.1 --- Binocular Single Vision --- p.16 / Chapter 3.1.2 --- Motor Fusion and Sensory Fusion --- p.19 / Chapter 3.1.3 --- Fusion, Suppression and Rivalry --- p.21 / Chapter 3.1.4 --- Rivalry --- p.23 / Chapter 3.1.5 --- Fusional Theory --- p.24 / Chapter 3.1.6 --- Fusion with Stereopsis --- p.27 / Chapter 3.2 --- Binocular discomfort --- p.29 / Chapter 3.2.1 --- Fusional area --- p.31 / Chapter 3.2.2 --- Contour difference --- p.32 / Chapter 3.2.3 --- Failure of rivalry --- p.33 / Chapter 3.2.4 --- Contour and regional contrast --- p.34 / Chapter 4 --- Binocular Visual Comfort Predictor (BVCP) --- p.37 / Chapter 4.1 --- Introduction --- p.37 / Chapter 4.2 --- Design of BVCP --- p.40 / Chapter 4.2.1 --- Fusional Area --- p.40 / Chapter 4.2.2 --- Contour Fusion --- p.42 / Chapter 4.2.3 --- Failure of Rivalry --- p.48 / Chapter 4.2.4 --- Contour and Regional Contrast --- p.53 / Chapter 4.2.5 --- The Overall Fusion Predictor --- p.54 / Chapter 4.3 --- Experiments and User Study --- p.56 / Chapter 4.4 --- Discussion --- p.60 / Chapter 5 --- Binocular Tone Mapping --- p.62 / Chapter 5.1 --- Introduction --- p.62 / Chapter 5.2 --- Binocular Tone Mapping Framework --- p.66 / Chapter 5.2.1 --- System Overview --- p.66 / Chapter 5.2.2 --- Optimization --- p.68 / Chapter 5.3 --- Experiments and Results --- p.71 / Chapter 5.4 --- User Study --- p.77 / Chapter 5.4.1 --- Visual Richness --- p.77 / Chapter 5.4.2 --- Binocular Symmetry --- p.81 / Chapter 5.5 --- Discussion --- p.82 / Chapter 5.5.1 --- Incorporating Stereopsis --- p.82 / Chapter 5.5.2 --- Limitation --- p.84 / Chapter 5.5.3 --- Extension --- p.85 / Chapter 6 --- Conclusion --- p.91 / Chapter 6.1 --- Contribution --- p.91 / Chapter 6.2 --- Future Work --- p.92 / Chapter A --- More Results of Binocular Tone Mapping --- p.94 / Chapter B --- Test Sequence for BVCP --- p.103 / Bibliography --- p.108
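The framework's central move, deriving two differently tone-mapped LDR images from one HDR source, can be illustrated with a simple global operator of the Reinhard L/(1+L) form, treating the second view's exposure as the free parameter. The operator choice and numbers below are illustrative assumptions, not the thesis's optimization:

```python
def tone_map(hdr, exposure):
    """Global Reinhard-style operator: scale, compress, quantize to 8 bits."""
    out = []
    for lum in hdr:
        l = lum * exposure
        out.append(round(255 * l / (1.0 + l)))
    return out

# one HDR scanline spanning four orders of magnitude in luminance
hdr = [0.01, 0.1, 1.0, 10.0, 100.0]
left = tone_map(hdr, 1.0)    # default exposure
right = tone_map(hdr, 0.1)   # darker view retains highlight detail
print(left, right)
```

The right view separates the two brightest samples by 104 grey levels where the left view separates them by only 20; viewed on a binocular display, such a pair can carry more of the HDR range than either image alone, subject to the comfort limits the BVCP models.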
214

An Inertial-Optical Tracking System for Quantitative, Freehand, 3D Ultrasound

Goldsmith, Abraham Myron 16 January 2009 (has links)
Three dimensional (3D) ultrasound has become an increasingly popular medical imaging tool over the last decade. It offers significant advantages over Two Dimensional (2D) ultrasound, such as improved accuracy, the ability to display image planes that are physically impossible with 2D ultrasound, and reduced dependence on the skill of the sonographer. Among 3D medical imaging techniques, ultrasound is the only one portable enough to be used by first responders, on the battlefield, and in rural areas. There are three basic methods of acquiring 3D ultrasound images. In the first method, a 2D array transducer is used to capture a 3D volume directly, using electronic beam steering. This method is mainly used for echocardiography. In the second method, a linear array transducer is mechanically actuated, giving a slower and less expensive alternative to the 2D array. The third method uses a linear array transducer that is moved by hand. This method is known as freehand 3D ultrasound. Whether using a 2D array or a mechanically actuated linear array transducer, the position and orientation of each image is known ahead of time. This is not the case for freehand scanning. To reconstruct a 3D volume from a series of 2D ultrasound images, assumptions must be made about the position and orientation of each image, or a mechanism for detecting the position and orientation of each image must be employed. The most widely used method for freehand 3D imaging relies on the assumption that the probe moves along a straight path with constant orientation and speed. This method requires considerable skill on the part of the sonographer. Another technique uses features within the images themselves to form an estimate of each image's relative location. However, these techniques are not well accepted for diagnostic use because they are not always reliable. The final method for acquiring position and orientation information is to use a six Degree-of-Freedom (6 DoF) tracking system. 
Commercially available 6 DoF tracking systems use magnetic fields, ultrasonic ranging, or optical tracking to measure the position and orientation of a target. Although accurate, all of these systems have fundamental limitations in that they are relatively expensive and they all require sensors or transmitters to be placed in fixed locations to provide a fixed frame of reference. The goal of the work presented here is to create a probe tracking system for freehand 3D ultrasound that does not rely on any fixed frame of reference. This system tracks the ultrasound probe using only sensors integrated into the probe itself. The advantages of such a system are that it requires no setup before it can be used, it is more portable because no extra equipment is required, it is immune from environmental interference, and it is less expensive than external tracking systems. An ideal tracking system for freehand 3D ultrasound would track in all 6 DoF. However, current sensor technology limits this system to five. Linear transducer motion along the skin surface is tracked optically and transducer orientation is tracked using MEMS gyroscopes. An optical tracking system was developed around an optical mouse sensor to provide linear position information by tracking the skin surface. Two versions were evaluated. One included an optical fiber bundle and the other did not. The purpose of the optical fiber is to allow the system to integrate more easily into existing probes by allowing the sensor and electronics to be mounted away from the scanning end of the probe. Each version was optimized to track features on the skin surface while providing adequate Depth Of Field (DOF) to accept variation in the height of the skin surface. Orientation information is acquired using a 3 axis MEMS gyroscope. The sensor was thoroughly characterized to quantify performance in terms of accuracy and drift. 
This data provided a basis for estimating the achievable 3D reconstruction accuracy of the complete system. Electrical and mechanical components were designed to attach the sensor to the ultrasound probe in such a way as to simulate its being embedded in the probe itself. An embedded system was developed to perform the processing necessary to translate the sensor data into probe position and orientation estimates in real time. The system utilizes a Microblaze soft core microprocessor and a set of peripheral devices implemented in a Xilinx Spartan 3E field programmable gate array. The Xilinx Microkernel real time operating system performs essential system management tasks and provides a stable software platform for implementation of the inertial tracking algorithm. Stradwin 3D ultrasound software was used to provide a user interface and perform the actual 3D volume reconstruction. Stradwin retrieves 2D ultrasound images from the Terason t3000 portable ultrasound system and communicates with the tracking system to gather position and orientation data. The 3D reconstruction is generated and displayed on the screen of the PC in real time. Stradwin also provides essential system features such as storage and retrieval of data, 3D data interaction, reslicing, manual 3D segmentation, and volume calculation for segmented regions. The 3D reconstruction performance of the system was evaluated by freehand scanning a cylindrical inclusion in a CIRS model 044 ultrasound phantom. Five different motion profiles were used and each profile was repeated 10 times. This entire test regimen was performed twice, once with the optical tracking system using the optical fiber bundle, and once with the optical tracking system without the optical fiber bundle. 3D reconstructions were performed with and without the position and orientation data to provide a basis for comparison. Volume error and surface error were used as the performance metrics. 
Volume error ranged from 1.3% to 5.3% with tracking information versus 15.6% to 21.9% without for the version of the system without the optical fiber bundle. Volume error ranged from 3.7% to 7.6% with tracking information versus 8.7% to 13.7% without for the version of the system with the optical fiber bundle. Surface error ranged from 0.319 mm RMS to 0.462 mm RMS with tracking information versus 0.678 mm RMS to 1.261 mm RMS without for the version of the system without the optical fiber bundle. Surface error ranged from 0.326 mm RMS to 0.774 mm RMS with tracking information versus 0.538 mm RMS to 1.657 mm RMS without for the version of the system with the optical fiber bundle. The prototype tracking system successfully demonstrated that accurate 3D ultrasound volumes can be generated from 2D freehand data using only sensors integrated into the ultrasound probe. One serious shortcoming of this system is that it only tracks 5 of the 6 degrees of freedom required to perform complete 3D reconstructions. The optical system provides information about linear movement but because it tracks a surface, it cannot measure vertical displacement. Overcoming this limitation is the most obvious candidate for future research using this system. The overall tracking platform, meaning the embedded tracking computer and the PC software, developed and integrated in this work, is ready to take advantage of vertical displacement data, should a method be developed for sensing it.
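Orientation in the tracker above comes from integrating MEMS gyroscope angular rates over time, which is also where the drift quantified in the sensor characterization originates. A single-axis sketch with an invented constant bias (real gyros add noise and a time-varying bias on top of this):

```python
def integrate_gyro(rates, dt, bias=0.0):
    """Integrate angular-rate samples (deg/s) into an angle (deg),
    subtracting an estimated constant bias first."""
    angle = 0.0
    for w in rates:
        angle += (w - bias) * dt
    return angle

dt = 0.01                      # 100 Hz sample rate
true_rate = 5.0                # deg/s rotation, held for 2 s
bias = 0.3                     # hypothetical gyro bias, deg/s
samples = [true_rate + bias] * 200
print(integrate_gyro(samples, dt))          # uncorrected: drifts to ~10.6 deg
print(integrate_gyro(samples, dt, bias))    # bias-corrected: ~10.0 deg
```

Because bias error accumulates linearly with time, careful characterization of the gyroscope, as described in the abstract, is what bounds the achievable 3D reconstruction accuracy.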
215

Lagrangian study of the Southern Ocean circulation

McAufield, Ewa Katarzyna January 2019 (has links)
The Southern Ocean is an important region for the sequestration of heat, carbon dioxide and other tracers. The Southern Ocean circulation is typically described in a circumpolarly averaged sense as a Meridional Overturning Circulation (MOC), but the detailed 3-D pathways that make up this circulation remain poorly understood. We use Lagrangian particle trajectories, obtained from eddy-permitting numerical models, to map out and quantify different aspects of the 3-D circulation. We first introduce various definitions used to quantify efficient export from the Antarctic Circumpolar Current (ACC) to the subtropical gyres. Using these definitions, we show that the permanent northward export varies by water mass and occurs in localised regions, with 11 key pathways identified. We then examine the dynamics setting the location and efficiency of the identified pathways, including the role of diapycnal mixing and the impact of short- and long-timescale variability in the flow. Although we show that the flow of particles in the 3-D model is predominantly isopycnal, we find that particles forced to remain on isopycnals yield approximately 60% lower export (mainly via three pathways) than identical releases where the diapycnal component of advection is included. Enhanced upward mixing near rough topography, and downward mixing in the southeast Pacific, were shown to be mostly responsible for this export. In addition, we show that most of the export pathways are mainly influenced by timescales from 90 days to 20 years, which suggests that mesoscale eddies are not of leading-order importance in the northward export from the ACC to the subtropical gyres. However, we also find that mesoscale eddies and the mean ACC flow play a significant role in setting the export from the ACC in some pathways.
These results highlight the role of temporal variability and vertical transport in enhancing the northward flow from the ACC by allowing transport across barotropic streamlines and onto more efficiently exporting isopycnals. In addition, the asymmetrical response of the studied quantities emphasises the importance of all three dimensions in understanding the dynamics driving the overturning circulation. We also demonstrate that the annually repeating velocity fields commonly used for trajectory calculations increase the diapycnal transport of particles and, as a consequence, increase the overall 20-year northward export from the ACC by approximately 10%. In the study of the meridional overturning circulation, we diagnose the geographical distribution of the streamwise-averaged diffusivity calculated from meridional displacements of the Lagrangian particles. We examine streamwise averaging using both latitude and equivalent latitude, and argue that the latter gives a more useful measure. Reconciling tracer and particle horizontal diffusivities, we show that in the ACC the average diffusivity peaks between 1500 m and 2500 m depth, with an average value of 1500 m²/s, and that it is highest near topographic features. We compare the exact diffusivity and its approximation to show that an assumption of time homogeneity does not hold, and therefore that standard expressions for diffusivity that assume time homogeneity are of limited usefulness. Finally, we use the calculated trajectories to provide a streamwise-averaged 2-D advection-diffusion model of the Southern Ocean MOC, and then examine the extent to which this 2-D model can capture the overall effect of the actual 3-D transport.
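The particle diffusivity diagnosed here follows the standard Taylor single-particle definition, kappa ≈ ⟨y′²⟩/2t: half the mean squared meridional displacement of an ensemble, divided by elapsed time. A sketch on synthetic random-walk trajectories (step statistics invented for illustration, not model output):

```python
import random

def meridional_diffusivity(trajectories, dt):
    """kappa ~ <y'^2> / (2 t) from final displacements of an ensemble (m^2/s)."""
    n = len(trajectories)
    t = (len(trajectories[0]) - 1) * dt
    msd = sum(traj[-1] ** 2 for traj in trajectories) / n
    return msd / (2.0 * t)

random.seed(0)
dt, steps, n_particles = 8.64e4, 100, 500   # 1-day steps, 500 particles
step_std = 5.0e3                            # assumed 5 km r.m.s. daily displacement
trajs = []
for _ in range(n_particles):
    y, traj = 0.0, [0.0]
    for _ in range(steps):
        y += random.gauss(0.0, step_std)    # uncorrelated meridional wander
        traj.append(y)
    trajs.append(traj)

# theoretical value: step_std**2 / (2 * dt) ~ 145 m^2/s
print(meridional_diffusivity(trajs, dt))
```

For real ocean trajectories the displacement statistics are neither Gaussian nor time-homogeneous, which is exactly the limitation of the standard expressions that the thesis examines.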
216

Personalized perspectives in 3-D assembly.

Stead, Lawrence Scarritt January 1978 (has links)
Thesis. 1978. M.S.--Massachusetts Institute of Technology. Dept. of Architecture. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ROTCH. / Bibliography: leaf 35. / M.S.
217

Three-dimensional medical ultrasound image reconstruction using noise reduction and data compression. / CUHK electronic theses & dissertations collection

January 1998 (has links)
by Xiang Shao hua. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (p. 233-[248]). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
218

3D Object Understanding from RGB-D Data

Feng, Jie January 2017 (has links)
Understanding 3D objects and being able to interact with them in the physical world are essential for building intelligent computer vision systems, with tremendous potential for applications ranging from augmented reality and 3D printing to robotics. It might seem simple for humans to look at and make sense of the visual world; for machines, however, accomplishing similar tasks is a complicated process. Generally, such a system involves a series of steps: identify and segment a target object, estimate its 3D shape, and predict its pose in an open scene where the target objects may not have been seen before. Although considerable research has addressed these problems, they remain very challenging due to a few key issues: 1) most methods rely solely on color images for interpreting the 3D properties of an object; 2) large labeled color image datasets are expensive to obtain for tasks like pose estimation, limiting the ability to train powerful prediction models; 3) training data for the target object is typically required for 3D shape estimation and pose prediction, making these methods hard to scale and generalize to unseen objects. Recently, several technological changes have created interesting opportunities for solving these fundamental vision problems. First, low-cost depth sensors have become widely available, providing an additional sensory input, the depth map, which is very useful for extracting 3D information about the object and scene. Second, with the ease of 3D object scanning using depth sensors and open access to large-scale 3D model databases like 3D Warehouse and ShapeNet, it is possible to leverage such data to build powerful learning models. Third, machine learning algorithms such as deep learning have become powerful enough to surpass the state of the art, or even human performance, on challenging tasks like object recognition. It is now feasible to learn rich information from large datasets in a single model.
The objective of this thesis is to leverage these emerging tools and data to address the above challenges in 3D object understanding from a new perspective, by designing machine learning algorithms that utilize RGB-D data. Instead of depending solely on color images, we combine color and depth images to achieve significantly higher performance in object segmentation. We use a large collection of 3D object models to provide high-quality training data, and retrieve visually similar 3D CAD models from low-quality captured depth images, which enables knowledge transfer from database objects to the target object in an observed scene. By using content-based 3D shape retrieval, we also significantly improve pose estimation via similar proxy models, without the need to create an exact 3D model as a reference.
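The gain from combining color with depth can be seen even in a naive thresholding toy: two surfaces of nearly identical color separate cleanly once depth is consulted. A sketch with invented pixel values and thresholds (not the thesis's learning-based segmentation):

```python
def segment(pixels, color_ref, color_tol, depth_max):
    """Label a pixel foreground if it is close to the reference color
    AND nearer than depth_max -- color alone cannot make this split."""
    labels = []
    for (r, g, b, depth) in pixels:
        color_ok = (abs(r - color_ref[0]) + abs(g - color_ref[1])
                    + abs(b - color_ref[2])) < color_tol
        labels.append(1 if color_ok and depth < depth_max else 0)
    return labels

# (R, G, B, depth in metres): a red cup at 0.8 m, a similar red wall at 2.5 m
pixels = [(200, 30, 30, 0.8), (205, 35, 28, 0.8),
          (198, 32, 33, 2.5), (60, 120, 200, 0.9)]
print(segment(pixels, (200, 30, 30), 30, 1.5))  # -> [1, 1, 0, 0]
```

A color-only segmenter would merge the third pixel (the red wall) into the cup; the depth channel resolves the ambiguity, which is the intuition behind fusing RGB and D throughout the thesis.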
219

3D object reconstruction from line drawings.

January 2005 (has links)
Cao Liangliang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 64-69). / Abstracts in English and Chinese. / Chapter 1 --- Introduction and Related Work --- p.1 / Chapter 1.1 --- Reconstruction from Single Line Drawings and the Applications --- p.1 / Chapter 1.2 --- Optimization-based Reconstruction --- p.2 / Chapter 1.3 --- Other Reconstruction Methods --- p.2 / Chapter 1.3.1 --- Line Labeling and Algebraic Methods --- p.2 / Chapter 1.3.2 --- CAD Reconstruction --- p.3 / Chapter 1.3.3 --- Modelling from Images --- p.3 / Chapter 1.4 --- Finding Faces of Line Drawings --- p.4 / Chapter 1.5 --- Generalized Cylinder --- p.4 / Chapter 1.6 --- Research Problems and Our Contribution --- p.5 / Chapter 1.6.1 --- A New Criteria --- p.5 / Chapter 1.6.2 --- Recover Objects from Line Drawings without Hidden Lines --- p.6 / Chapter 1.6.3 --- Reconstruction of Curved Objects --- p.6 / Chapter 1.6.4 --- Planar Limbs Assumption and the Derived Models --- p.6 / Chapter 2 --- A New Criteria for Reconstruction --- p.8 / Chapter 2.1 --- Introduction --- p.8 / Chapter 2.2 --- Human Visual Perception and the Symmetry Measure --- p.10 / Chapter 2.3 --- Reconstruction Based on Symmetry and Planarity --- p.11 / Chapter 2.3.1 --- Finding Faces --- p.11 / Chapter 2.3.2 --- Constraint of Planarity --- p.11 / Chapter 2.3.3 --- Objective Function --- p.12 / Chapter 2.3.4 --- Reconstruction Algorithm --- p.13 / Chapter 2.4 --- Experimental Results --- p.13 / Chapter 2.5 --- Summary --- p.18 / Chapter 3 --- Line Drawings without Hidden Lines: Inference and Reconstruction --- p.19 / Chapter 3.1 --- Introduction --- p.19 / Chapter 3.2 --- Terminology --- p.20 / Chapter 3.3 --- Theoretical Inference of the Hidden Topological Structure --- p.21 / Chapter 3.3.1 --- Assumptions --- p.21 / Chapter 3.3.2 --- Finding the Degrees and Ranks --- p.22 / Chapter 3.3.3 --- Constraints for the Inference --- p.23 / Chapter 3.4 --- An Algorithm to Recover the Hidden Topological Structure --- p.25 / Chapter 3.4.1 --- Outline of the Algorithm --- p.26 / Chapter 3.4.2 --- Constructing the Initial Hidden Structure --- p.26 / Chapter 3.4.3 --- Reducing Initial Hidden Structure --- p.27 / Chapter 3.4.4 --- Selecting the Most Plausible Structure --- p.28 / Chapter 3.5 --- Reconstruction of 3D Objects --- p.29 / Chapter 3.6 --- Experimental Results --- p.32 / Chapter 3.7 --- Summary --- p.32 / Chapter 4 --- Curved Objects Reconstruction from 2D Line Drawings --- p.35 / Chapter 4.1 --- Introduction --- p.35 / Chapter 4.2 --- Related Work --- p.36 / Chapter 4.2.1 --- Face Identification --- p.36 / Chapter 4.2.2 --- 3D Reconstruction of planar objects --- p.37 / Chapter 4.3 --- Reconstruction of Curved Objects --- p.37 / Chapter 4.3.1 --- Transformation of Line Drawings --- p.37 / Chapter 4.3.2 --- Finding 3D Bezier Curves --- p.39 / Chapter 4.3.3 --- Bezier Surface Patches and Boundaries --- p.40 / Chapter 4.3.4 --- Generating Bezier Surface Patches --- p.41 / Chapter 4.4 --- Results --- p.43 / Chapter 4.5 --- Summary --- p.45 / Chapter 5 --- Planar Limbs and Degen Generalized Cylinders --- p.47 / Chapter 5.1 --- Introduction --- p.47 / Chapter 5.2 --- Planar Limbs and View Directions --- p.49 / Chapter 5.3 --- DGCs in Homogeneous Coordinates --- p.53 / Chapter 5.3.1 --- Homogeneous Coordinates --- p.53 / Chapter 5.3.2 --- Degen Surfaces --- p.54 / Chapter 5.3.3 --- DGCs --- p.54 / Chapter 5.4 --- Properties of DGCs --- p.56 / Chapter 5.5 --- Potential Applications --- p.59 / Chapter 5.5.1 --- Recovery of DGC Descriptions --- p.59 / Chapter 5.5.2 --- Deformable DGCs --- p.60 / Chapter 5.6 --- Summary --- p.61 / Chapter 6 --- Conclusion and Future Work --- p.62 / Bibliography --- p.64
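The planarity constraint of Chapter 2.3.2 can be scored, for a quadrilateral face, as the distance of the fourth vertex from the plane spanned by the other three; optimization-based reconstruction drives such residuals toward zero while fitting the drawing. A minimal sketch with hand-picked coordinates (the thesis's actual objective also includes the symmetry measure):

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def planarity_residual(face):
    """Distance of the 4th vertex from the plane of the first three."""
    p0, p1, p2, p3 = face
    u = tuple(a - b for a, b in zip(p1, p0))
    v = tuple(a - b for a, b in zip(p2, p0))
    n = cross(u, v)                          # plane normal
    norm = sum(c * c for c in n) ** 0.5
    w = tuple(a - b for a, b in zip(p3, p0))
    return abs(sum(a * b for a, b in zip(n, w))) / norm

flat = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
bent = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.5)]
print(planarity_residual(flat), planarity_residual(bent))  # 0.0 0.5
```

In a reconstruction objective, the vertex depths are the free variables and a sum of such residuals over all identified faces is minimized, alongside terms rewarding symmetry.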
220

Human computer interaction: a vision-based approach for American sign language recognition. / CUHK electronic theses & dissertations collection

January 2002 (has links)
Deng Jiangwen. / "April 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 156-170). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
