351

三國時期的地方勢力 (Local Powers in the Three Kingdoms Period). / San guo shi qi de di fang shi li.

January 1974 (has links)
Thesis (Master's)--Chinese University of Hong Kong. / Bibliography: leaves 307-311. / Manuscript. / Introduction --- p.1-6 / Chapter 1 --- The Members of the Local Powers and Their Political Activities --- p.7-62 / Section 1 --- The Members of the Local Powers and the Evolution of Their Status --- p.7-21 / Section 2 --- Refugees and Refugee Groups --- p.22-34 / Section 3 --- The Political Activities of the Local Powers --- p.35-62 / Chapter 2 --- The Tendencies of the Local Powers --- p.63-189 / Section 1 --- Cao Cao's Core Forces at the Start of His Rise and the Qiao-Pei Group --- p.63-90 / Section 2 --- The Liu Regimes and the Local Powers of Yu, Xu, Jing and Yi Provinces --- p.91-122 / Section 3 --- The Sun-Wu Regime in Jiangdong --- p.123-144 / Section 4 --- The Conflict between the Dual Lord-Vassal Ethic under the Feudal System and the Local Powers' Choice of Allegiance --- p.145-165 / Section 5 --- The Vacillation of the Local Powers --- p.166-189 / Chapter 3 --- The Relations between the Contending Warlords and the Local Powers --- p.190-302 / Section 1 --- The Rise and Fall of the Warlords in the Central Plains before Cao Cao's Pacification of Jizhou, and Cao's Policy of Compromise --- p.190-217 / Section 2 --- The Shift in Cao Cao's Policy after the Pacification of Jizhou --- p.218-229 / Section 3 --- The Policies of Yuan Shao, Gongsun Zan, Gongsun Du, Tao Qian, Liu Bei, Liu Biao, Liu Yan and Others --- p.230-244 / Section 4 --- Sun-Wu's Policy in Jiangdong --- p.245-267 / Section 5 --- The Formation of the Compromise Policy --- p.268-288 / (1) Ceding Territory / (2) Marriage Alliances / (3) Hostage Service / Section 6 --- The Policy of Suppression: from Gradual Division to Forced Resettlement --- p.289-302 / Conclusion --- p.303-306 / Bibliography --- p.307-311
352

Image motion estimation for 3D model based video conferencing.

January 2000 (has links)
Cheung Man-kin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 116-120). / Abstracts in English and Chinese. / Chapter 1) --- Introduction --- p.1 / Chapter 1.1) --- Building of the 3D Wireframe and Facial Model --- p.2 / Chapter 1.2) --- Description of 3D Model Based Video Conferencing --- p.3 / Chapter 1.3) --- Wireframe Model Fitting or Conformation --- p.6 / Chapter 1.4) --- Pose Estimation --- p.8 / Chapter 1.5) --- Facial Motion Estimation and Synthesis --- p.9 / Chapter 1.6) --- Thesis Outline --- p.10 / Chapter 2) --- Wireframe Model Fitting --- p.11 / Chapter 2.1) --- Algorithm of WFM Fitting --- p.12 / Chapter 2.1.1) --- Global Deformation --- p.14 / Chapter a) --- Scaling --- p.14 / Chapter b) --- Shifting --- p.15 / Chapter 2.1.2) --- Local Deformation --- p.15 / Chapter a) --- Shifting --- p.16 / Chapter b) --- Scaling --- p.17 / Chapter 2.1.3) --- Fine Updating --- p.17 / Chapter 2.2) --- Steps of Fitting --- p.18 / Chapter 2.3) --- Functions of Different Deformation --- p.18 / Chapter 2.4) --- Experimental Results --- p.19 / Chapter 2.4.1) --- Output wireframe in each step --- p.19 / Chapter 2.4.2) --- Examples of Mis-fitted wireframe with incoming image --- p.22 / Chapter 2.4.3) --- Fitted 3D facial wireframe --- p.23 / Chapter 2.4.4) --- Effect of mis-fitted wireframe after compensation of motion --- p.24 / Chapter 2.5) --- Summary --- p.26 / Chapter 3) --- Epipolar Geometry --- p.27 / Chapter 3.1) --- Pinhole Camera Model and Perspective Projection --- p.28 / Chapter 3.2) --- Concepts in Epipolar Geometry --- p.31 / Chapter 3.2.1) --- Working with normalized image coordinates --- p.33 / Chapter 3.2.2) --- Working with pixel image coordinates --- p.35 / Chapter 3.2.3) --- Summary --- p.37 / Chapter 3.3) --- 8-point Algorithm (Essential and Fundamental Matrix) --- p.38 / Chapter 3.3.1) --- Outline of the 8-point algorithm --- p.38 / Chapter 3.3.2) --- Modification on obtained Fundamental Matrix --- p.39 / Chapter 3.3.3) --- Transformation of Image Coordinates --- p.40 / Chapter a) --- Translation to mean of points --- p.40 / Chapter b) --- Normalizing transformation --- p.41 / Chapter 3.3.4) --- Summary of 8-point algorithm --- p.41 / Chapter 3.4) --- Estimation of Object Position by Decomposition of Essential Matrix --- p.43 / Chapter 3.4.1) --- Algorithm Derivation --- p.43 / Chapter 3.4.2) --- Algorithm Outline --- p.46 / Chapter 3.5) --- Noise Sensitivity --- p.48 / Chapter 3.5.1) --- Rotation vector of model --- p.48 / Chapter 3.5.2) --- The projection of rotated model --- p.49 / Chapter 3.5.3) --- Noisy image --- p.51 / Chapter 3.5.4) --- Summary --- p.51 / Chapter 4) --- Pose Estimation --- p.54 / Chapter 4.1) --- Linear Method --- p.55 / Chapter 4.1.1) --- Theory --- p.55 / Chapter 4.1.2) --- Normalization --- p.57 / Chapter 4.1.3) --- Experimental Results --- p.58 / Chapter a) --- Synthesized image by linear method without normalization --- p.58 / Chapter b) --- Performance between linear method with and without normalization --- p.60 / Chapter c) --- Performance of linear method under quantization noise with different transformation components --- p.62 / Chapter d) --- Performance of normalized case without transformation in z-component --- p.63 / Chapter 4.1.4) --- Summary --- p.64 / Chapter 4.2) --- Two Stage Algorithm --- p.66 / Chapter 4.2.1) --- Introduction --- p.66 / Chapter 4.2.2) --- The Two Stage Algorithm --- p.67 / Chapter a) --- Stage 1 (Iterative Method) --- p.68 / Chapter b) --- Stage 2 (Non-linear Optimization) --- p.71 / Chapter 4.2.3) --- Summary of the Two Stage Algorithm --- p.72 / Chapter 4.2.4) --- Experimental Results --- p.72 / Chapter 4.2.5) --- Summary --- p.80 / Chapter 5) --- Facial Motion Estimation and Synthesis --- p.81 / Chapter 5.1) --- Facial Expression based on face muscles --- p.83 / Chapter 5.1.1) --- Review of Action Unit Approach --- p.83 / Chapter 5.1.2) --- Distribution of Motion Unit --- p.85 / Chapter 5.1.3) --- Algorithm --- p.89 / Chapter a) --- For Unidirectional Motion Unit --- p.89 / Chapter b) --- For Circular Motion Unit (eyes) --- p.90 / Chapter c) --- For Another Circular Motion Unit (mouth) --- p.90 / Chapter 5.1.4) --- Experimental Results --- p.91 / Chapter 5.1.5) --- Summary --- p.95 / Chapter 5.2) --- Detection of Facial Expression by Muscle-based Approach --- p.96 / Chapter 5.2.1) --- Theory --- p.96 / Chapter 5.2.2) --- Algorithm --- p.97 / Chapter a) --- For Sheet Muscle --- p.97 / Chapter b) --- For Circular Muscle --- p.98 / Chapter c) --- For Mouth Muscle --- p.99 / Chapter 5.2.3) --- Steps of Algorithm --- p.100 / Chapter 5.2.4) --- Experimental Results --- p.101 / Chapter 5.2.5) --- Summary --- p.103 / Chapter 6) --- Conclusion --- p.104 / Chapter 6.1) --- WFM fitting --- p.104 / Chapter 6.2) --- Pose Estimation --- p.105 / Chapter 6.3) --- Facial Estimation and Synthesis --- p.106 / Chapter 6.4) --- Discussion on Future Improvements --- p.107 / Chapter 6.4.1) --- WFM Fitting --- p.107 / Chapter 6.4.2) --- Pose Estimation --- p.109 / Chapter 6.4.3) --- Facial Motion Estimation and Synthesis --- p.110 / Chapter 7) --- Appendix --- p.111 / Chapter 7.1) --- Newton's Method or Newton-Raphson Method --- p.111 / Chapter 7.2) --- H.261 --- p.113 / Chapter 7.3) --- 3D Measurement --- p.114 / Bibliography --- p.116
353

Isosurface extraction and haptic rendering of volumetric data.

January 2000 (has links)
Kwong-Wai, Chen. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 114-118). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgments --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Volumetric Data --- p.1 / Chapter 1.2 --- Volume Visualization --- p.4 / Chapter 1.3 --- Thesis Contributions --- p.5 / Chapter 1.4 --- Thesis Outline --- p.6 / Chapter I --- Multi-body Surface Extraction --- p.8 / Chapter 2 --- Isosurface Extraction --- p.9 / Chapter 2.1 --- Previous Works --- p.10 / Chapter 2.1.1 --- Marching Cubes --- p.10 / Chapter 2.1.2 --- Skeleton Climbing --- p.12 / Chapter 2.1.3 --- Adaptive Skeleton Climbing --- p.14 / Chapter 2.2 --- Motivation --- p.17 / Chapter 3 --- Multi-body Surface Extraction --- p.19 / Chapter 3.1 --- Multi-body Surface --- p.19 / Chapter 3.2 --- Building 0-skeleton --- p.21 / Chapter 3.3 --- Building 1-skeleton --- p.23 / Chapter 3.3.1 --- Non-binary Faces --- p.24 / Chapter 3.3.2 --- Non-binary Cubes --- p.30 / Chapter 3.4 --- General Scheme for Messy Cubes --- p.33 / Chapter 3.4.1 --- Graph Reduction --- p.34 / Chapter 3.4.2 --- Position of the Tetrapoints --- p.36 / Chapter 3.5 --- Triangular Mesh Generation --- p.37 / Chapter 3.5.1 --- Generating the Edge Loops --- p.38 / Chapter 3.5.2 --- Triangulating the Edge Loops --- p.41 / Chapter 3.5.3 --- Incorporating with Adaptive Skeleton Climbing --- p.43 / Chapter 3.6 --- Implementation and Results --- p.45 / Chapter II --- Haptic Rendering of Volumetric Data --- p.60 / Chapter 4 --- Introduction to Haptics --- p.61 / Chapter 4.1 --- Terminology --- p.62 / Chapter 4.2 --- Haptic Rendering Process --- p.63 / Chapter 4.2.1 --- The Overall Process --- p.64 / Chapter 4.2.2 --- Force Profile --- p.65 / Chapter 4.2.3 --- Decoupling Processes --- p.66 / Chapter 4.3 --- The PHANToM™ Haptic Interface --- p.67 / Chapter 4.4 --- Research Goals --- p.69 / Chapter 5 --- Haptic Rendering of Geometric Models --- p.70 / Chapter 5.1 --- Penalty Based Methods --- p.71 / Chapter 5.1.1 --- Vector Fields for Solid Objects --- p.71 / Chapter 5.1.2 --- Drawbacks of Penalty Based Methods --- p.72 / Chapter 5.2 --- Constraint Based Methods --- p.73 / Chapter 5.2.1 --- Virtual Haptic Interface Point --- p.73 / Chapter 5.2.2 --- The Constraints --- p.74 / Chapter 5.2.3 --- Location Computation --- p.78 / Chapter 5.2.4 --- Force Shading --- p.79 / Chapter 5.2.5 --- Adding Surface Properties --- p.80 / Chapter 6 --- Haptic Rendering of Volumetric Data --- p.83 / Chapter 6.1 --- Volume Haptization --- p.84 / Chapter 6.2 --- Isosurface Haptic Rendering --- p.86 / Chapter 6.3 --- Intermediate Representation Approach --- p.89 / Chapter 6.3.1 --- Introduction --- p.89 / Chapter 6.3.2 --- Intermediate Virtual Plane --- p.90 / Chapter 6.3.3 --- Updating Virtual Plane --- p.92 / Chapter 6.3.4 --- Preventing Force Discontinuity Artifacts --- p.93 / Chapter 6.3.5 --- Experiments and Results --- p.94 / Chapter 7 --- Conclusions and Future Research Directions --- p.98 / Chapter 7.1 --- Conclusions --- p.98 / Chapter 7.2 --- Future Research Directions --- p.99 / Chapter A --- Two Proofs of Multi-body Surface Extraction Algorithm --- p.101 / Chapter A.1 --- Graph Terminology and Theorems --- p.101 / Chapter A.2 --- Occurrence of Tripoints in Negative-Positive Pairs --- p.103 / Chapter A.3 --- Validity of the General Scheme --- p.103 / Chapter B --- An Example of Multi-body Surface Extraction Algorithm --- p.105 / Chapter B.1 --- Step 1: Building 0-Skeleton --- p.105 / Chapter B.2 --- Step 2: Building 1-Skeleton --- p.106 / Chapter B.2.1 --- Step 2a: Building 1-Skeleton and Tripoints on Cube Faces --- p.106 / Chapter B.2.2 --- Step 2b: Adding Tetrapoints and Tri-edges inside Cube --- p.106 / Chapter B.3 --- Step 3: Constructing Edge Loops and Triangulating --- p.109 / Bibliography --- p.114
354

Stereo vision without the scene-smoothness assumption: the homography-based approach.

January 1998 (has links)
by Andrew L. Arengo. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 65-66). / Abstract also in Chinese. / Acknowledgments --- p.ii / List Of Figures --- p.v / Abstract --- p.vii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation and Objective --- p.2 / Chapter 1.2 --- Approach of This Thesis and Contributions --- p.3 / Chapter 1.3 --- Organization of This Thesis --- p.4 / Chapter 2 --- Previous Work --- p.6 / Chapter 2.1 --- Using Grouped Features --- p.6 / Chapter 2.2 --- Applying Additional Heuristics --- p.7 / Chapter 2.3 --- Homography and Related Works --- p.9 / Chapter 3 --- Theory and Problem Formulation --- p.10 / Chapter 3.1 --- Overview of the Problems --- p.10 / Chapter 3.1.1 --- Preprocessing --- p.10 / Chapter 3.1.2 --- Establishing Correspondences --- p.11 / Chapter 3.1.3 --- Recovering 3D Depth --- p.14 / Chapter 3.2 --- Solving the Correspondence Problem --- p.15 / Chapter 3.2.1 --- Epipolar Constraint --- p.15 / Chapter 3.2.2 --- Surface-Continuity and Feature-Ordering Heuristics --- p.16 / Chapter 3.2.3 --- Using the Concept of Homography --- p.18 / Chapter 3.3 --- Concept of Homography --- p.20 / Chapter 3.3.1 --- Barycentric Coordinate System --- p.20 / Chapter 3.3.2 --- Image to Image Mapping of the Same Plane --- p.22 / Chapter 3.4 --- Problem Formulation --- p.23 / Chapter 3.4.1 --- Preliminaries --- p.23 / Chapter 3.4.2 --- Case of Single Planar Surface --- p.24 / Chapter 3.4.3 --- Case of Multiple Planar Surfaces --- p.28 / Chapter 3.5 --- Subspace Clustering --- p.28 / Chapter 3.6 --- Overview of the Approach --- p.30 / Chapter 4 --- Experimental Results --- p.33 / Chapter 4.1 --- Synthetic Images --- p.33 / Chapter 4.2 --- Aerial Images --- p.36 / Chapter 4.2.1 --- T-shape building --- p.38 / Chapter 4.2.2 --- Rectangular Building --- p.39 / Chapter 4.2.3 --- 3-layers Building --- p.40 / Chapter 4.2.4 --- Pentagon --- p.44 / Chapter 4.3 --- Indoor Scenes --- p.52 / Chapter 4.3.1 --- Stereo Motion Pair --- p.53 / Chapter 4.3.2 --- Hallway Scene --- p.56 / Chapter 5 --- Summary and Conclusions --- p.63
355

Stereo vision and motion analysis in complement.

January 1998 (has links)
by Ho Pui-Kuen, Patrick. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 57-59). / Abstract also in Chinese. / Acknowledgments --- p.ii / List Of Figures --- p.v / List Of Tables --- p.vi / Abstract --- p.vii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation of Problem --- p.1 / Chapter 1.2 --- Our Approach and Summary of Contributions --- p.3 / Chapter 1.3 --- Organization of this Thesis --- p.4 / Chapter 2 --- Previous Work --- p.5 / Chapter 3 --- Structure Recovery from Stereo-Motion Images --- p.7 / Chapter 3.1 --- Motion Model --- p.8 / Chapter 3.2 --- Stereo-Motion Model --- p.10 / Chapter 3.3 --- Inferring Stereo Correspondences --- p.13 / Chapter 3.4 --- Determining 3D Structure from One Stereo Pair --- p.17 / Chapter 3.5 --- Computational Complexity of Inference Process --- p.18 / Chapter 4 --- Experimental Results --- p.19 / Chapter 4.1 --- Synthetic Images and Statistical Results --- p.19 / Chapter 4.2 --- Real Image Sequences --- p.21 / Chapter 4.2.1 --- 'House Model' Image Sequences --- p.22 / Chapter 4.2.2 --- 'Oscilloscope and Soda Can' Image Sequences --- p.23 / Chapter 4.2.3 --- 'Bowl' Image Sequences --- p.24 / Chapter 4.2.4 --- 'Building' Image Sequences --- p.27 / Chapter 4.3 --- Computational Time of Experiments --- p.28 / Chapter 5 --- Determining Motion and Structure from All Stereo Pairs --- p.30 / Chapter 5.1 --- Determining Motion and Structure --- p.31 / Chapter 5.2 --- Identifying Incorrect Motion Correspondences --- p.33 / Chapter 6 --- More Experiments --- p.34 / Chapter 6.1 --- 'Synthetic Cube' Images --- p.34 / Chapter 6.2 --- 'Snack Bag' Image Sequences --- p.35 / Chapter 6.3 --- Comparison with Structure Recovered from One Stereo Pair --- p.37 / Chapter 7 --- Conclusion --- p.41 / Chapter A --- Basic Concepts in Computer Vision --- p.43 / Chapter A.1 --- Camera Projection Model --- p.43 / Chapter A.2 --- Epipolar Constraint in Stereo Vision --- p.47 / Chapter B --- Inferring Stereo Correspondences with Matrices of Rank < 4 --- p.49 / Chapter C --- Generating Image Reprojection --- p.51 / Chapter D --- Singular Value Decomposition --- p.53 / Chapter E --- Quaternion --- p.55
356

Interactive volume visualization in a virtual environment.

January 1998 (has links)
by Yu-Hang Siu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 74-80). / Abstract also in Chinese. / Abstract --- p.iii / Acknowledgements --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Volume Visualization --- p.2 / Chapter 1.2 --- Virtual Environment --- p.11 / Chapter 1.3 --- Approach --- p.12 / Chapter 1.4 --- Thesis Overview --- p.13 / Chapter 2 --- Contour Extraction --- p.15 / Chapter 2.1 --- Concept of Intelligent Scissors --- p.16 / Chapter 2.2 --- Dijkstra's Algorithm --- p.18 / Chapter 2.3 --- Cost Function --- p.20 / Chapter 2.4 --- Summary --- p.23 / Chapter 3 --- Volume Cutting --- p.24 / Chapter 3.1 --- Basic idea of the algorithm --- p.25 / Chapter 3.2 --- Intelligent Scissors on Surface Mesh --- p.27 / Chapter 3.3 --- Internal Cutting Surface --- p.29 / Chapter 3.4 --- Summary --- p.34 / Chapter 4 --- Three-dimensional Intelligent Scissors --- p.35 / Chapter 4.1 --- 3D Graph Construction --- p.36 / Chapter 4.2 --- Cost Function --- p.40 / Chapter 4.3 --- Applications --- p.42 / Chapter 4.3.1 --- Surface Extraction --- p.42 / Chapter 4.3.2 --- Vessel Tracking --- p.47 / Chapter 4.4 --- Summary --- p.49 / Chapter 5 --- Implementations in a Virtual Environment --- p.52 / Chapter 5.1 --- Volume Cutting --- p.53 / Chapter 5.2 --- Surface Extraction --- p.56 / Chapter 5.3 --- Vessel Tracking --- p.59 / Chapter 5.4 --- Summary --- p.64 / Chapter 6 --- Conclusions --- p.68 / Chapter 6.1 --- Summary of Results --- p.68 / Chapter 6.2 --- Future Directions --- p.70 / Chapter A --- Performance of Dijkstra's Shortest Path Algorithm --- p.72 / Chapter B --- IsoRegion Construction --- p.73
357

Fast interactive 2D and 3D segmentation tools.

January 1998 (has links)
by Kevin Chun-Ho Wong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 74-79). / Abstract also in Chinese. / Chinese Abstract --- p.v / Abstract --- p.vi / Acknowledgements --- p.vii / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Prior Work: Image Segmentation Techniques --- p.3 / Chapter 2.1 --- Introduction to Image Segmentation --- p.4 / Chapter 2.2 --- Region Based Segmentation --- p.5 / Chapter 2.2.1 --- Boundary Based vs Region Based --- p.5 / Chapter 2.2.2 --- Region growing --- p.5 / Chapter 2.2.3 --- Integrating Region Based and Edge Detection --- p.6 / Chapter 2.2.4 --- Watershed Based Methods --- p.8 / Chapter 2.3 --- Fuzzy Set Theory in Segmentation --- p.8 / Chapter 2.3.1 --- Fuzzy Geometry Concept --- p.8 / Chapter 2.3.2 --- Fuzzy C-Means (FCM) Clustering --- p.9 / Chapter 2.4 --- Canny edge filter with contour following --- p.11 / Chapter 2.5 --- Pyramid based Fast Curve Extraction --- p.12 / Chapter 2.6 --- Curve Extraction with Multi-Resolution Fourier transformation --- p.13 / Chapter 2.7 --- User interfaces for Image Segmentation --- p.13 / Chapter 2.7.1 --- Intelligent Scissors --- p.14 / Chapter 2.7.2 --- Magic Wands --- p.16 / Chapter 3 --- Prior Work: Active Contours Model (Snakes) --- p.17 / Chapter 3.1 --- Introduction to Active Contour Model --- p.18 / Chapter 3.2 --- Variants and Extensions of Snakes --- p.19 / Chapter 3.2.1 --- Balloons --- p.20 / Chapter 3.2.2 --- Robust Dual Active Contour --- p.21 / Chapter 3.2.3 --- Gradient Vector Flow Snakes --- p.22 / Chapter 3.2.4 --- Energy Minimization using Dynamic Programming with presence of hard constraints --- p.23 / Chapter 3.3 --- Conclusions --- p.25 / Chapter 4 --- Slimmed Graph --- p.26 / Chapter 4.1 --- BSP-based image analysis --- p.27 / Chapter 4.2 --- Split Line Selection --- p.29 / Chapter 4.3 --- Split Line Selection with Summed Area Table --- p.29 / Chapter 4.4 --- Neighbor blocks --- p.31 / Chapter 4.5 --- Slimmed Graph Generation --- p.32 / Chapter 4.6 --- Time Complexity --- p.35 / Chapter 4.7 --- Results and Conclusions --- p.36 / Chapter 5 --- Fast Intelligent Scissors --- p.38 / Chapter 5.1 --- Background --- p.39 / Chapter 5.2 --- Motivation of Fast Intelligent Scissors --- p.39 / Chapter 5.3 --- Main idea of Fast Intelligent Scissors --- p.40 / Chapter 5.3.1 --- Node position and Cost function --- p.41 / Chapter 5.4 --- Implementation and Results --- p.42 / Chapter 5.5 --- Conclusions --- p.43 / Chapter 6 --- 3D Contour Detection: Volume Cutting --- p.50 / Chapter 6.1 --- Interactive Volume Cutting with the intelligent scissors --- p.51 / Chapter 6.2 --- Contour Selection --- p.52 / Chapter 6.2.1 --- 3D Intelligent Scissors --- p.53 / Chapter 6.2.2 --- Dijkstra's algorithm --- p.54 / Chapter 6.3 --- 3D Volume Cutting --- p.54 / Chapter 6.3.1 --- Cost function for the cutting surface --- p.55 / Chapter 6.3.2 --- Continuity function (x, y, z) --- p.59 / Chapter 6.3.3 --- Finding the cutting surface --- p.61 / Chapter 6.3.4 --- Topological problems for the volume cutting --- p.61 / Chapter 6.3.5 --- Assumptions for the well-conditioned contour used in our algorithm --- p.62 / Chapter 6.4 --- Implementation and Results --- p.64 / Chapter 6.5 --- Conclusions --- p.64 / Chapter 7 --- Conclusions --- p.71 / Chapter 7.1 --- Contributions --- p.71 / Chapter 7.2 --- Future Work --- p.72 / Chapter 7.2.1 --- Real-time interactive tools with Slimmed Graph --- p.72 / Chapter 7.2.2 --- 3D slimmed graph --- p.72 / Chapter 7.2.3 --- Cartoon Film Generation System --- p.72
358

Binocular tone mapping. / 雙目色調映射 / CUHK electronic theses & dissertations collection / Shuang mu se diao ying she

January 2012 (has links)
With the booming of 3D movies and video games, binocular (stereo) display devices have become increasingly popular and affordable. By introducing one additional image space, stereo displays double the image domains available for visualization, one for the left eye and the other for the right eye. Existing binocular display systems utilize this dual image domain only for stereopsis. / Human binocular vision is able to fuse not only two images with disparity, but also two images that differ in luminance, contrast and even detail, into a single percept, up to a certain limit. This phenomenon is known as binocular single vision. Through a complicated neurophysiologic fusion process, humans can perceive more visual content via binocular single vision than from one arbitrary single view or the linear blending of two views. / In this thesis, binocular single vision is utilized in computer graphics for the first time. Based on this phenomenon, a novel binocular tone mapping framework is proposed. From the source high-dynamic range (HDR) image, the proposed framework generates a binocular low-dynamic range (LDR) image pair that preserves more human-perceivable visual content than a single LDR image, using the additional image domain. Given a tone mapping method, the framework first generates one tone-mapped LDR image (the left, without loss of generality) with the default or user-selected parameters. Its counterpart image (the right) of the LDR pair is then optimally synthesized from the same source HDR image. The two LDR images are not identical and preserve different visual information; via binocular displays, they aggregately present more human-perceivable visual richness than any single LDR image. / Human binocular vision has only a limited tolerance for the difference between two views. When this limit is exceeded, binocular viewing discomfort appears. To prevent such discomfort, a novel binocular viewing comfort predictor (BVCP) is also proposed to predict the comfort threshold of binocular vision. In the framework, the BVCP guides the generation of the LDR image pair so that it never triggers visual discomfort. Through several user studies, the effectiveness of the proposed framework in increasing human-perceivable visual richness, and the ability of the BVCP to predict the binocular discomfort threshold, have been demonstrated and validated. / Yang, Xuan. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 108-115). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese. / Abstract --- p.i / Acknowledgement --- p.ix / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background Study --- p.5 / Chapter 2.1 --- Stereo Display --- p.5 / Chapter 2.2 --- HDR Tone Mapping --- p.9 / Chapter 2.2.1 --- HDR Image --- p.9 / Chapter 2.2.2 --- Tone Mapping --- p.11 / Chapter 3 --- Binocular Vision --- p.16 / Chapter 3.1 --- Binocular Single Vision --- p.16 / Chapter 3.1.1 --- Binocular Single Vision --- p.16 / Chapter 3.1.2 --- Motor Fusion and Sensory Fusion --- p.19 / Chapter 3.1.3 --- Fusion, Suppression and Rivalry --- p.21 / Chapter 3.1.4 --- Rivalry --- p.23 / Chapter 3.1.5 --- Fusional Theory --- p.24 / Chapter 3.1.6 --- Fusion with Stereopsis --- p.27 / Chapter 3.2 --- Binocular Discomfort --- p.29 / Chapter 3.2.1 --- Fusional Area --- p.31 / Chapter 3.2.2 --- Contour Difference --- p.32 / Chapter 3.2.3 --- Failure of Rivalry --- p.33 / Chapter 3.2.4 --- Contour and Regional Contrast --- p.34 / Chapter 4 --- Binocular Visual Comfort Predictor (BVCP) --- p.37 / Chapter 4.1 --- Introduction --- p.37 / Chapter 4.2 --- Design of BVCP --- p.40 / Chapter 4.2.1 --- Fusional Area --- p.40 / Chapter 4.2.2 --- Contour Fusion --- p.42 / Chapter 4.2.3 --- Failure of Rivalry --- p.48 / Chapter 4.2.4 --- Contour and Regional Contrast --- p.53 / Chapter 4.2.5 --- The Overall Fusion Predictor --- p.54 / Chapter 4.3 --- Experiments and User Study --- p.56 / Chapter 4.4 --- Discussion --- p.60 / Chapter 5 --- Binocular Tone Mapping --- p.62 / Chapter 5.1 --- Introduction --- p.62 / Chapter 5.2 --- Binocular Tone Mapping Framework --- p.66 / Chapter 5.2.1 --- System Overview --- p.66 / Chapter 5.2.2 --- Optimization --- p.68 / Chapter 5.3 --- Experiments and Results --- p.71 / Chapter 5.4 --- User Study --- p.77 / Chapter 5.4.1 --- Visual Richness --- p.77 / Chapter 5.4.2 --- Binocular Symmetry --- p.81 / Chapter 5.5 --- Discussion --- p.82 / Chapter 5.5.1 --- Incorporating Stereopsis --- p.82 / Chapter 5.5.2 --- Limitation --- p.84 / Chapter 5.5.3 --- Extension --- p.85 / Chapter 6 --- Conclusion --- p.91 / Chapter 6.1 --- Contribution --- p.91 / Chapter 6.2 --- Future Work --- p.92 / Chapter A --- More Results of Binocular Tone Mapping --- p.94 / Chapter B --- Test Sequence for BVCP --- p.103 / Bibliography --- p.108
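The abstract above describes the framework only at a high level, as a generate-then-optimize loop. The following is a minimal sketch of that loop, not the thesis's actual algorithm: `tone_map` is a toy gamma-style operator standing in for whichever tone mapping method is supplied, `visual_richness` uses histogram entropy as a stand-in for the human-perceivable content measure, and `bvcp_ok` collapses the BVCP (fusional area, contour fusion, rivalry, regional contrast) into a single mean-difference threshold. Every name and threshold here is an assumption for illustration.

```python
import numpy as np

def tone_map(hdr, gamma):
    """Toy global operator: range compression followed by a gamma curve.
    Stands in for any real tone mapping method; the framework is
    operator-agnostic."""
    compressed = hdr / (1.0 + hdr)          # map [0, inf) into [0, 1)
    return np.clip(compressed ** gamma, 0.0, 1.0)

def visual_richness(ldr, bins=64):
    """Proxy for human-perceivable visual content: histogram entropy."""
    hist, _ = np.histogram(ldr, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def bvcp_ok(left, right, threshold=0.15):
    """Stand-in for the BVCP: accept the pair only while the mean absolute
    luminance difference stays under a comfort threshold. The real
    predictor models fusional area, contour fusion, failure of rivalry,
    and regional contrast."""
    return np.mean(np.abs(left - right)) < threshold

def binocular_tone_map(hdr, left_gamma=1.0):
    """Left image from the chosen parameters; right image searched for
    maximum richness subject to the comfort constraint."""
    left = tone_map(hdr, left_gamma)
    best_right, best_score = left, -np.inf   # fall back to identical pair
    for g in np.linspace(0.4, 2.5, 50):      # hypothetical parameter sweep
        right = tone_map(hdr, g)
        if not bvcp_ok(left, right):
            continue                         # pair would trigger discomfort
        score = visual_richness(right)       # left's richness is fixed
        if score > best_score:
            best_score, best_right = score, right
    return left, best_right

# Usage on a synthetic HDR field spanning roughly 4 orders of magnitude:
hdr = np.exp(np.random.uniform(np.log(1e-2), np.log(1e2), size=(128, 128)))
left, right = binocular_tone_map(hdr)
```

The design point the sketch preserves is that the comfort predictor acts as a hard constraint on the search for the second image, while visual richness is the objective being maximized.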
359

Dynamics and control of a tilting three wheeled vehicle

Berote, Johan J. H. January 2010 (has links)
No description available.
360

An Inertial-Optical Tracking System for Quantitative, Freehand, 3D Ultrasound

Goldsmith, Abraham Myron 16 January 2009 (has links)
Three dimensional (3D) ultrasound has become an increasingly popular medical imaging tool over the last decade. It offers significant advantages over Two Dimensional (2D) ultrasound, such as improved accuracy, the ability to display image planes that are physically impossible with 2D ultrasound, and reduced dependence on the skill of the sonographer. Among 3D medical imaging techniques, ultrasound is the only one portable enough to be used by first responders, on the battlefield, and in rural areas. There are three basic methods of acquiring 3D ultrasound images. In the first method, a 2D array transducer is used to capture a 3D volume directly, using electronic beam steering. This method is mainly used for echocardiography. In the second method, a linear array transducer is mechanically actuated, giving a slower and less expensive alternative to the 2D array. The third method uses a linear array transducer that is moved by hand. This method is known as freehand 3D ultrasound. Whether using a 2D array or a mechanically actuated linear array transducer, the position and orientation of each image is known ahead of time. This is not the case for freehand scanning. To reconstruct a 3D volume from a series of 2D ultrasound images, assumptions must be made about the position and orientation of each image, or a mechanism for detecting the position and orientation of each image must be employed. The most widely used method for freehand 3D imaging relies on the assumption that the probe moves along a straight path with constant orientation and speed. This method requires considerable skill on the part of the sonographer. Another technique uses features within the images themselves to form an estimate of each image's relative location. However, these techniques are not well accepted for diagnostic use because they are not always reliable. The final method for acquiring position and orientation information is to use a six Degree-of-Freedom (6 DoF) tracking system. Commercially available 6 DoF tracking systems use magnetic fields, ultrasonic ranging, or optical tracking to measure the position and orientation of a target. Although accurate, all of these systems have fundamental limitations in that they are relatively expensive and they all require sensors or transmitters to be placed in fixed locations to provide a fixed frame of reference. The goal of the work presented here is to create a probe tracking system for freehand 3D ultrasound that does not rely on any fixed frame of reference. This system tracks the ultrasound probe using only sensors integrated into the probe itself. The advantages of such a system are that it requires no setup before it can be used, it is more portable because no extra equipment is required, it is immune from environmental interference, and it is less expensive than external tracking systems. An ideal tracking system for freehand 3D ultrasound would track in all 6 DoF. However, current sensor technology limits this system to five. Linear transducer motion along the skin surface is tracked optically and transducer orientation is tracked using MEMS gyroscopes. An optical tracking system was developed around an optical mouse sensor to provide linear position information by tracking the skin surface. Two versions were evaluated. One included an optical fiber bundle and the other did not. 
The purpose of the optical fiber is to allow the system to integrate more easily into existing probes by allowing the sensor and electronics to be mounted away from the scanning end of the probe. Each version was optimized to track features on the skin surface while providing adequate Depth of Field (DOF) to accept variation in the height of the skin surface. Orientation information is acquired using a 3-axis MEMS gyroscope. The sensor was thoroughly characterized to quantify performance in terms of accuracy and drift. This data provided a basis for estimating the achievable 3D reconstruction accuracy of the complete system. Electrical and mechanical components were designed to attach the sensor to the ultrasound probe in such a way as to simulate its being embedded in the probe itself. An embedded system was developed to perform the processing necessary to translate the sensor data into probe position and orientation estimates in real time. The system utilizes a MicroBlaze soft-core microprocessor and a set of peripheral devices implemented in a Xilinx Spartan-3E field-programmable gate array. The Xilinx Microkernel real-time operating system performs essential system management tasks and provides a stable software platform for implementation of the inertial tracking algorithm. Stradwin 3D ultrasound software was used to provide a user interface and perform the actual 3D volume reconstruction. Stradwin retrieves 2D ultrasound images from the Terason t3000 portable ultrasound system and communicates with the tracking system to gather position and orientation data. The 3D reconstruction is generated and displayed on the screen of the PC in real time. Stradwin also provides essential system features such as storage and retrieval of data, 3D data interaction, reslicing, manual 3D segmentation, and volume calculation for segmented regions. The 3D reconstruction performance of the system was evaluated by freehand scanning a cylindrical inclusion in a CIRS model 044 ultrasound phantom. Five different motion profiles were used and each profile was repeated 10 times. This entire test regimen was performed twice, once with the optical tracking system using the optical fiber bundle, and once with the optical tracking system without the optical fiber bundle. 3D reconstructions were performed with and without the position and orientation data to provide a basis for comparison. Volume error and surface error were used as the performance metrics. Volume error ranged from 1.3% to 5.3% with tracking information versus 15.6% to 21.9% without for the version of the system without the optical fiber bundle. Volume error ranged from 3.7% to 7.6% with tracking information versus 8.7% to 13.7% without for the version of the system with the optical fiber bundle. Surface error ranged from 0.319 mm RMS to 0.462 mm RMS with tracking information versus 0.678 mm RMS to 1.261 mm RMS without for the version of the system without the optical fiber bundle. Surface error ranged from 0.326 mm RMS to 0.774 mm RMS with tracking information versus 0.538 mm RMS to 1.657 mm RMS without for the version of the system with the optical fiber bundle. The prototype tracking system successfully demonstrated that accurate 3D ultrasound volumes can be generated from 2D freehand data using only sensors integrated into the ultrasound probe. One serious shortcoming of this system is that it only tracks 5 of the 6 degrees of freedom required to perform complete 3D reconstructions.
The optical system provides information about linear movement but because it tracks a surface, it cannot measure vertical displacement. Overcoming this limitation is the most obvious candidate for future research using this system. The overall tracking platform, meaning the embedded tracking computer and the PC software, developed and integrated in this work, is ready to take advantage of vertical displacement data, should a method be developed for sensing it.
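As a rough illustration of how the two sensor streams described above could be fused, here is a hypothetical 5-DoF dead-reckoning sketch: gyroscope rates are integrated into an orientation matrix, and each optical in-plane displacement is rotated into the world frame and accumulated. The function names, first-order integration step, and noise-free inputs are assumptions, not the thesis's implementation; note that, as in the real system, translation normal to the skin surface is unobservable.

```python
import numpy as np

def skew(w):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_gyro(R, omega, dt):
    """Advance orientation R by body angular rate omega (rad/s) over dt,
    using a Rodrigues (exponential-map) step."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return R
    K = skew(omega / np.linalg.norm(omega))
    dR = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return R @ dR

def track_probe(gyro_rates, optical_xy, dt):
    """5-DoF dead reckoning: the MEMS gyro triplet gives orientation; the
    optical surface sensor gives in-plane (x, y) displacement in the probe
    frame. The skin-normal translation component is left at zero,
    mirroring the system's stated 5-of-6-DoF limitation."""
    R = np.eye(3)                  # world-from-probe rotation
    p = np.zeros(3)                # probe position in the world frame
    poses = []
    for omega, (dx, dy) in zip(gyro_rates, optical_xy):
        R = integrate_gyro(R, omega, dt)
        p = p + R @ np.array([dx, dy, 0.0])   # rotate slip into world frame
        poses.append((R.copy(), p.copy()))    # pose tag for this 2D frame
    return poses

# Example: 100 frames at 30 Hz with a slow yaw and a steady 1 mm/frame sweep.
dt = 1.0 / 30.0
gyro = [np.array([0.0, 0.0, 0.05]) for _ in range(100)]   # rad/s
slip = [(0.001, 0.0) for _ in range(100)]                 # metres per frame
poses = track_probe(gyro, slip, dt)
```

Each resulting (R, p) pair would then tag the corresponding 2D ultrasound frame for volume reconstruction in software such as Stradwin.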
