  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Shape-based image retrieval in iconic image databases.

January 1999 (has links)
by Chan Yuk Ming. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 117-124). / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Content-based Image Retrieval --- p.3 / Chapter 1.2 --- Designing a Shape-based Image Retrieval System --- p.4 / Chapter 1.3 --- Information on Trademark --- p.6 / Chapter 1.3.1 --- What is a Trademark? --- p.6 / Chapter 1.3.2 --- Search for Conflicting Trademarks --- p.7 / Chapter 1.3.3 --- Research Scope --- p.8 / Chapter 1.4 --- Information on Chinese Cursive Script Character --- p.9 / Chapter 1.5 --- Problem Definition --- p.9 / Chapter 1.6 --- Contributions --- p.11 / Chapter 1.7 --- Thesis Organization --- p.13 / Chapter 2 --- Literature Review --- p.14 / Chapter 2.1 --- Trademark Retrieval using QBIC Technology --- p.14 / Chapter 2.2 --- STAR --- p.16 / Chapter 2.3 --- ARTISAN --- p.17 / Chapter 2.4 --- Trademark Retrieval using a Visually Salient Feature --- p.18 / Chapter 2.5 --- Trademark Recognition using Closed Contours --- p.19 / Chapter 2.6 --- Trademark Retrieval using a Two Stage Hierarchy --- p.19 / Chapter 2.7 --- Logo Matching using Negative Shape Features --- p.21 / Chapter 2.8 --- Chapter Summary --- p.22 / Chapter 3 --- Background on Shape Representation and Matching --- p.24 / Chapter 3.1 --- Simple Geometric Features --- p.25 / Chapter 3.1.1 --- Circularity --- p.25 / Chapter 3.1.2 --- Rectangularity --- p.26 / Chapter 3.1.3 --- Hole Area Ratio --- p.27 / Chapter 3.1.4 --- Horizontal Gap Ratio --- p.27 / Chapter 3.1.5 --- Vertical Gap Ratio --- p.28 / Chapter 3.1.6 --- Central Moments --- p.28 / Chapter 3.1.7 --- Major Axis Orientation --- p.29 / Chapter 3.1.8 --- Eccentricity --- p.30 / Chapter 3.2 --- Fourier Descriptors --- p.30 / Chapter 3.3 --- Chain Codes --- p.31 / Chapter 3.4 --- Seven Invariant Moments --- p.33 / Chapter 3.5 --- Zernike Moments --- p.35 / Chapter 3.6 --- Edge Direction Histogram --- p.36 / Chapter 3.7 
--- Curvature Scale Space Representation --- p.37 / Chapter 3.8 --- Chapter Summary --- p.39 / Chapter 4 --- Genetic Algorithm for Weight Assignment --- p.42 / Chapter 4.1 --- Genetic Algorithm (GA) --- p.42 / Chapter 4.1.1 --- Basic Idea --- p.43 / Chapter 4.1.2 --- Genetic Operators --- p.44 / Chapter 4.2 --- Why GA? --- p.45 / Chapter 4.3 --- Weight Assignment Problem --- p.46 / Chapter 4.3.1 --- Integration of Image Attributes --- p.46 / Chapter 4.4 --- Proposed Solution --- p.47 / Chapter 4.4.1 --- Formalization --- p.47 / Chapter 4.4.2 --- Proposed Genetic Algorithm --- p.43 / Chapter 4.5 --- Chapter Summary --- p.49 / Chapter 5 --- Shape-based Trademark Image Retrieval System --- p.50 / Chapter 5.1 --- Problems on Existing Methods --- p.50 / Chapter 5.1.1 --- Edge Direction Histogram --- p.51 / Chapter 5.1.2 --- Boundary Based Techniques --- p.52 / Chapter 5.2 --- Proposed Solution --- p.53 / Chapter 5.2.1 --- Image Preprocessing --- p.53 / Chapter 5.2.2 --- Automatic Feature Extraction --- p.54 / Chapter 5.2.3 --- Approximated Boundary --- p.55 / Chapter 5.2.4 --- Integration of Shape Features and Query Processing --- p.58 / Chapter 5.3 --- Experimental Results --- p.58 / Chapter 5.3.1 --- Experiment 1: Weight Assignment using Genetic Algorithm --- p.59 / Chapter 5.3.2 --- Experiment 2: Speed on Feature Extraction and Retrieval --- p.62 / Chapter 5.3.3 --- Experiment 3: Evaluation by Precision --- p.63 / Chapter 5.3.4 --- Experiment 4: Evaluation by Recall for Deformed Images --- p.64 / Chapter 5.3.5 --- Experiment 5: Evaluation by Recall for Hand Drawn Query Trademarks --- p.66 / Chapter 5.3.6 --- "Experiment 6: Evaluation by Recall for Rotated, Scaled and Mirrored Images" --- p.66 / Chapter 5.3.7 --- Experiment 7: Comparison of Different Integration Methods --- p.68 / Chapter 5.4 --- Chapter Summary --- p.71 / Chapter 6 --- Shape-based Chinese Cursive Script Character Image Retrieval System --- p.72 / Chapter 6.1 --- Comparison to Trademark Retrieval 
Problem --- p.79 / Chapter 6.1.1 --- Feature Selection --- p.73 / Chapter 6.1.2 --- Speed of System --- p.73 / Chapter 6.1.3 --- Variation of Style --- p.73 / Chapter 6.2 --- Target of the Research --- p.74 / Chapter 6.3 --- Proposed Solution --- p.75 / Chapter 6.3.1 --- Image Preprocessing --- p.75 / Chapter 6.3.2 --- Automatic Feature Extraction --- p.76 / Chapter 6.3.3 --- Thinned Image and Linearly Normalized Image --- p.76 / Chapter 6.3.4 --- Edge Directions --- p.77 / Chapter 6.3.5 --- Integration of Shape Features --- p.78 / Chapter 6.4 --- Experimental Results --- p.79 / Chapter 6.4.1 --- Experiment 8: Weight Assignment using Genetic Algorithm --- p.79 / Chapter 6.4.2 --- Experiment 9: Speed on Feature Extraction and Retrieval --- p.81 / Chapter 6.4.3 --- Experiment 10: Evaluation by Recall for Deformed Images --- p.82 / Chapter 6.4.4 --- Experiment 11: Evaluation by Recall for Rotated and Scaled Images --- p.83 / Chapter 6.4.5 --- Experiment 12: Comparison of Different Integration Methods --- p.85 / Chapter 6.5 --- Chapter Summary --- p.87 / Chapter 7 --- Conclusion --- p.88 / Chapter 7.1 --- Summary --- p.88 / Chapter 7.2 --- Future Research --- p.89 / Chapter 7.2.1 --- Limitations --- p.89 / Chapter 7.2.2 --- Future Directions --- p.90 / Chapter A --- A Representative Subset of Trademark Images --- p.91 / Chapter B --- A Representative Subset of Cursive Script Character Images --- p.93 / Chapter C --- Shape Feature Extraction Toolbox for Matlab V5.3 --- p.95 / Chapter C.1 --- central_moment --- p.95 / Chapter C.2 --- centroid --- p.96 / Chapter C.3 --- cir --- p.96 / Chapter C.4 --- css --- p.97 / Chapter C.5 --- css_match --- p.100 / Chapter C.6 --- ecc --- p.102 / Chapter C.7 --- edge_directions --- p.102 / Chapter C.8 --- fourier_d --- p.105 / Chapter C.9 --- gen_shape --- p.106 / Chapter C.10 --- hu7 --- p.108 / Chapter C.11 --- isclockwise --- p.109 / Chapter C.12 --- moment --- p.110 / Chapter C.13 --- normalized_moment --- p.111 / Chapter C.14 --- orientation --- p.111 / Chapter C.15 --- resample_pts --- p.112 / Chapter C.16 --- rectangularity --- p.113 / Chapter C.17 --- trace_points --- p.114 / Chapter C.18 --- warp_conv --- p.115 / Bibliography --- p.117
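Chapter 3 of this thesis surveys simple geometric shape features (circularity, rectangularity, central moments, eccentricity). As a rough illustration of two of those descriptors for a binary mask, the following sketch is not code from the thesis's Matlab toolbox; the function name and exact formulations are assumptions:

```python
import numpy as np

def shape_features(mask):
    """Two simple geometric features for a binary object mask:
    rectangularity and eccentricity (illustrative definitions only)."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    # Rectangularity: object area over its bounding-box area.
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    rectangularity = area / (height * width)
    # Second-order central moments.
    mu20 = ((xs - xs.mean()) ** 2).mean()
    mu02 = ((ys - ys.mean()) ** 2).mean()
    mu11 = ((xs - xs.mean()) * (ys - ys.mean())).mean()
    # Eccentricity from the eigenvalues of the covariance matrix.
    d = np.sqrt((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2)
    lam_big = (mu20 + mu02 + d) / 2.0
    lam_small = (mu20 + mu02 - d) / 2.0
    eccentricity = np.sqrt(1.0 - lam_small / lam_big)
    return rectangularity, eccentricity

# A filled square is perfectly rectangular and not eccentric at all.
rect, ecc = shape_features(np.ones((10, 10), dtype=bool))  # → 1.0, 0.0
```

Features like these are cheap to compute and invariant to translation, which is why retrieval systems often combine several of them with weights, as the thesis's genetic-algorithm chapter does.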
402

ADVISE: advanced digital video information segmentation engine.

January 2002 (has links)
by Chung-Wing Ng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 100-107). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgment --- p.vi / Table of Contents --- p.vii / List of Tables --- p.x / List of Figures --- p.xi / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Image-based Video Description --- p.2 / Chapter 1.2 --- Video Summary --- p.5 / Chapter 1.3 --- Video Matching --- p.6 / Chapter 1.4 --- Contributions --- p.7 / Chapter 1.5 --- Outline of Thesis --- p.8 / Chapter Chapter 2 --- Literature Review --- p.10 / Chapter 2.1 --- Video Retrieval in Digital Video Libraries --- p.11 / Chapter 2.1.1 --- The VISION Project --- p.11 / Chapter 2.1.2 --- The INFORMEDIA Project --- p.12 / Chapter 2.1.3 --- Discussion --- p.13 / Chapter 2.2 --- Video Structuring --- p.14 / Chapter 2.2.1 --- Video Segmentation --- p.16 / Chapter 2.2.2 --- Color histogram Extraction --- p.17 / Chapter 2.2.3 --- Further Structuring --- p.18 / Chapter 2.3 --- XML Technologies --- p.19 / Chapter 2.3.1 --- XML Syntax --- p.20 / Chapter 2.3.2 --- "Document Type Definition, DTD" --- p.21 / Chapter 2.3.3 --- "Extensible Stylesheet Language, XSL" --- p.21 / Chapter 2.4 --- SMIL Technology --- p.22 / Chapter 2.4.1 --- SMIL Syntax --- p.23 / Chapter 2.4.2 --- Model of SMIL Applications --- p.23 / Chapter Chapter 3 --- Overview of ADVISE --- p.25 / Chapter 3.1 --- Objectives --- p.26 / Chapter 3.2 --- System Architecture --- p.26 / Chapter 3.2.1 --- Video Preprocessing Module --- p.26 / Chapter 3.2.2 --- Web-based Video Retrieval Module --- p.30 / Chapter 3.2.3 --- Video Streaming Server --- p.34 / Chapter 3.3 --- Summary --- p.35 / Chapter Chapter 4 --- Construction of Video Table-of-Contents (V-ToC) --- p.36 / Chapter 4.1 --- Video Structuring --- p.37 / Chapter 4.1.1 --- Terms and Definitions --- p.37 / Chapter 4.1.2 --- Regional Color Histograms --- p.39 / Chapter 4.1.3 --- Video Shot Boundaries Detection 
--- p.43 / Chapter 4.1.4 --- Video Groups Formation --- p.47 / Chapter 4.1.5 --- Video Scenes Formation --- p.50 / Chapter 4.2 --- Storage and Presentation --- p.53 / Chapter 4.2.1 --- Definition of XML Video Structure --- p.54 / Chapter 4.2.2 --- V-ToC Presentation Using XSL --- p.55 / Chapter 4.3 --- Evaluation of Video Structure --- p.58 / Chapter Chapter 5 --- Video Summarization --- p.62 / Chapter 5.1 --- Terms and Definitions --- p.64 / Chapter 5.2 --- Video Features Used for Summarization --- p.65 / Chapter 5.3 --- Video Summarization Algorithm --- p.67 / Chapter 5.3.1 --- Combining Extracted Video Segments --- p.68 / Chapter 5.3.2 --- Scoring the Extracted Video Segments --- p.69 / Chapter 5.3.3 --- Selecting Extracted Video Segments --- p.70 / Chapter 5.3.4 --- Refining the Selection Result --- p.71 / Chapter 5.4 --- Video Summary in SMIL --- p.74 / Chapter 5.5 --- Evaluations --- p.76 / Chapter 5.5.1 --- Experiment 1: Percentages of Features Extracted --- p.76 / Chapter 5.5.2 --- Experiment 2: Evaluation of the Refinement Process --- p.78 / Chapter Chapter 6 --- Video Matching Using V-ToC --- p.80 / Chapter 6.1 --- Terms and Definitions --- p.81 / Chapter 6.2 --- Video Features Used for Matching --- p.82 / Chapter 6.3 --- Non-ordered Tree Matching Algorithm --- p.83 / Chapter 6.4 --- Ordered Tree Matching Algorithms --- p.87 / Chapter 6.5 --- Evaluation of Video Matching --- p.91 / Chapter 6.5.1 --- Applying Non-ordered Tree Matching --- p.92 / Chapter 6.5.2 --- Applying Ordered Tree Matching --- p.94 / Chapter Chapter 7 --- Conclusion --- p.96 / Bibliography --- p.100
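Chapter 4 of this thesis builds the video table-of-contents from regional color histograms and shot boundary detection. A minimal sketch of the generic histogram-difference idea follows; it is not ADVISE's actual algorithm, and the bin count and threshold are assumptions:

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Detect shot cuts by comparing successive frames' gray-level
    histograms; a cut is declared where the L1 distance between the
    normalized histograms (range [0, 2]) exceeds the threshold."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()  # normalize to a distribution
        if prev is not None and np.abs(hist - prev).sum() > threshold:
            cuts.append(i)        # boundary between frames i-1 and i
        prev = hist
    return cuts

# Two synthetic "shots": three dark frames, then three bright frames.
dark = [np.full((8, 8), 20) for _ in range(3)]
bright = [np.full((8, 8), 200) for _ in range(3)]
cuts = shot_boundaries(dark + bright)  # → [3]
```

Real systems (including, per the TOC, this one) refine the idea with regional histograms and further grouping of shots into groups and scenes.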
403

Combinação de dispositivos de baixo custo para rastreamento de gestos /

Agostinho, Isabele Andreoli. January 2014 (has links)
Acompanha 1 CD-ROM / Orientador: José Remo Ferreira Brega / Banca: Ildeberto Aparecido Rodello / Banca: Aparecido Nilceu Marana / Resumo: Algumas pesquisas mostram que a combinação de mais de uma tecnologia de sensor pode melhorar o rastreamento de movimentos, tornando-o mais preciso ou permitindo a implementação de aplicações que usam movimentos complexos, como nas línguas de sinais por exemplo. A combinação de dispositivos de rastreamento de movimentos vendidos comercialmente permite desenvolver sistemas de baixo custo e de fácil utilização. O Kinect, o Wii Remote e a 5DT Data Glove Ultra são dispositivos que usam tecnologias que fornecem informações complementares de rastreamento de braços e mãos, são fáceis de usar, têm baixo custo e possuem bibliotecas de desenvolvimento gratuitas, entre outras vantagens. Para avaliar a combinação desses dispositivos para rastreamento de gestos, foi desenvolvido um sistema de rastreamento que contém dois módulos principais, um de tratamento dos dispositivos, com inicialização e junção dos movimentos, e outro com a visualização da movimentação do Humano Virtual para o rastreamento feito. Este sistema utiliza a luva para a captura da configuração das mãos, o Wii Remote para fornecer a rotação dos antebraços e o Kinect para o rastreamento dos braços e da inclinação dos antebraços. Foram executados testes para vários movimentos, e os resultados obtidos relativos a cada dispositivo foram tratados e o rastreamento reproduzido em tempo real no Humano Virtual com sucesso / Abstract: Some studies show that combining more than one sensor technology can improve motion tracking, making it more precise or enabling applications that use complex movements, such as sign languages. The combination of commercially available tracking devices allows the development of low-cost, easy-to-use systems.
The Kinect, the Wii Remote and the 5DT Data Glove Ultra are devices whose technologies provide complementary information for arm and hand tracking; they are easy to use, low cost, and have free development libraries, among other advantages. To evaluate the combination of these devices for gesture tracking, a system was developed with two main modules: one for device processing, with initialization and fusion of the movements, and another for visualizing the tracked movements on a Virtual Human. This system uses the glove to capture hand configuration, the Wii Remote to provide forearm rotation, and the Kinect to track the arms and the forearm pitch. Tests were run for various movements; the data from each device were processed, and the tracked movement was successfully reproduced on the Virtual Human in real time / Mestre
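The abstract describes a fusion module that joins complementary readings: hand configuration from the glove, forearm rotation from the Wii Remote, and arm position plus forearm pitch from the Kinect. A minimal sketch of that kind of merge; the joint names and data layout are invented for illustration and are not the thesis's data model:

```python
def fuse_pose(kinect, wiimote, glove):
    """Merge complementary per-device readings into one pose dict:
    the Kinect supplies arm positions and forearm pitch, the Wii
    Remote forearm roll, and the glove finger configuration."""
    pose = {}
    pose.update({k: v for k, v in kinect.items()
                 if k.startswith(("arm", "forearm_pitch"))})
    pose.update({k: v for k, v in wiimote.items()
                 if k.startswith("forearm_roll")})
    pose.update({k: v for k, v in glove.items()
                 if k.startswith("finger")})
    return pose

pose = fuse_pose(
    kinect={"arm_right": (0.1, 0.2, 0.3), "forearm_pitch_right": 30.0},
    wiimote={"forearm_roll_right": 45.0},
    glove={"finger_flexion_right": [0.2, 0.9, 0.8, 0.7, 0.6]},
)
```

The point of the design is that no single low-cost device sees everything; each contributes only the channels it measures well.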
404

Reconstruction from projections based on detection and estimation of objects

Rossi, David John January 1982 (has links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaves 336-341. / by David John Rossi. / Ph.D.
405

High-Speed Wide-Field Time-Correlated Single-Photon Counting Fluorescence Lifetime Imaging Microscopy

Field, Ryan Michael January 2014 (has links)
Fluorescence microscopy is a powerful imaging technique used in the biological sciences to identify labeled components of a sample with specificity. This is usually accomplished by labeling with fluorescent dyes, isolating these dyes by their spectral signatures with optical filters, and recording the intensity of the fluorescent response. Although these techniques are widely used, fluorescence intensity images can be degraded by a variety of factors that affect the measured intensity. Fluorescence lifetime imaging microscopy (FLIM) is an imaging technique that is relatively immune to intensity fluctuations and also provides the unique ability to directly monitor the microenvironment surrounding a fluorophore. Despite these benefits, FLIM's applications are fairly limited due to long image acquisition times and the high cost of traditional hardware. Recent advances in complementary metal-oxide-semiconductor (CMOS) single-photon avalanche diodes (SPADs) have enabled the design of low-cost imaging arrays that can record lifetime images with acquisition times more than an order of magnitude faster than those of existing systems. However, these SPAD arrays have yet to realize the full potential of the technology due to limitations in their ability to handle the vast amount of data generated by the commonly used time-correlated single-photon counting (TCSPC) lifetime imaging technique. This thesis presents the design, implementation, characterization, and demonstration of a high-speed FLIM imaging system. The components of this design include a CMOS imager chip in a standard 0.13 μm technology containing a custom CMOS SPAD, a 64-by-64 array of these SPADs, pixel control circuitry, independent time-to-digital converters (TDCs), a FLIM-specific datapath, and high-bandwidth output buffers.
In addition to the CMOS imaging array, a complete system was designed and implemented using a printed circuit board (PCB) for capturing data from the imager, building histograms of the photon arrival data using field-programmable gate arrays, and transferring the data to a computer over a cabled PCIe interface. Finally, software is used to communicate between the imaging system and a computer. The dark count rate of the SPAD was measured to be only 231 Hz at room temperature while maintaining a photon detection probability of up to 30%. TDCs included on the array have a 62.5 ps resolution and a 64 ns range, which is suitable for measuring the lifetime of most biological fluorophores. Additionally, the on-chip datapath was designed to handle continuous data transfers at rates capable of supporting TCSPC-based lifetime imaging at 100 frames per second. The system-level implementation also provides sufficient data throughput for transferring up to 750 frames per second from the imaging system to a computer. The lifetime imaging system was characterized using standard techniques for evaluating SPAD performance and an electrical delay signal for measuring the TDC performance. This thesis concludes with a demonstration of TCSPC-FLIM imaging at 100 frames per second, the fastest 64-by-64 TCSPC FLIM demonstrated to date. This system overcomes some of the limitations of existing FLIM systems and has the potential to enable new application domains in dynamic FLIM imaging.
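The abstract mentions TCSPC histograms built from 62.5 ps TDC bins over a 64 ns range. As a hedged illustration of how a lifetime is recovered from such a histogram (not the thesis's implementation; real pipelines typically use maximum-likelihood or multi-exponential fitting), here is a simple log-linear mono-exponential fit:

```python
import numpy as np

def fit_lifetime(counts, bin_width_ps):
    """Estimate a mono-exponential fluorescence lifetime (in ps) from
    a TCSPC histogram via a log-linear least-squares fit; empty bins
    are skipped because log(0) is undefined."""
    counts = np.asarray(counts, dtype=float)
    t = np.arange(len(counts)) * bin_width_ps
    mask = counts > 0
    # log N(t) = log A - t / tau  →  fitted slope = -1 / tau
    slope, _intercept = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope

# Noise-free synthetic decay with tau = 2.5 ns, sampled in the
# 62.5 ps TDC bins quoted in the abstract (1024 bins = 64 ns range).
t = np.arange(1024) * 62.5
counts = 1e4 * np.exp(-t / 2500.0)
tau_est = fit_lifetime(counts, 62.5)  # ≈ 2500 ps
```

At 100 frames per second, a fit like this must run for every pixel of the 64-by-64 array each frame, which is why the chip builds the histograms in hardware.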
406

Digital photo album management techniques: from one dimension to multi-dimension.

January 2005 (has links)
Lu Yang. / Thesis submitted in: November 2004. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 96-103). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation --- p.1 / Chapter 1.2 --- Our Contributions --- p.3 / Chapter 1.3 --- Thesis Outline --- p.5 / Chapter 2 --- Background Study --- p.7 / Chapter 2.1 --- MPEG-7 Introduction --- p.8 / Chapter 2.2 --- Image Analysis in CBIR Systems --- p.11 / Chapter 2.2.1 --- Color Information --- p.13 / Chapter 2.2.2 --- Color Layout --- p.19 / Chapter 2.2.3 --- Texture Information --- p.20 / Chapter 2.2.4 --- Shape Information --- p.24 / Chapter 2.2.5 --- CBIR Systems --- p.26 / Chapter 2.3 --- Image Processing in JPEG Frequency Domain --- p.30 / Chapter 2.4 --- Photo Album Clustering --- p.33 / Chapter 3 --- Feature Extraction and Similarity Analysis --- p.38 / Chapter 3.1 --- Feature Set in Frequency Domain --- p.38 / Chapter 3.1.1 --- JPEG Frequency Data --- p.39 / Chapter 3.1.2 --- Our Feature Set --- p.42 / Chapter 3.2 --- Digital Photo Similarity Analysis --- p.43 / Chapter 3.2.1 --- Energy Histogram --- p.43 / Chapter 3.2.2 --- Photo Distance --- p.45 / Chapter 4 --- 1-Dimensional Photo Album Management Techniques --- p.49 / Chapter 4.1 --- Photo Album Sorting --- p.50 / Chapter 4.2 --- Photo Album Clustering --- p.52 / Chapter 4.3 --- Photo Album Compression --- p.56 / Chapter 4.3.1 --- Variable IBP frames --- p.56 / Chapter 4.3.2 --- Adaptive Search Window --- p.57 / Chapter 4.3.3 --- Compression Flow --- p.59 / Chapter 4.4 --- Experiments and Performance Evaluations --- p.60 / Chapter 5 --- High Dimensional Photo Clustering --- p.67 / Chapter 5.1 --- Traditional Clustering Techniques --- p.67 / Chapter 5.1.1 --- Hierarchical Clustering --- p.68 / Chapter 5.1.2 --- Traditional K-means --- p.71 / Chapter 5.2 --- Multidimensional Scaling --- p.74 / Chapter 5.2.1 --- 
Introduction --- p.75 / Chapter 5.2.2 --- Classical Scaling --- p.77 / Chapter 5.3 --- Our Interactive MDS-based Clustering --- p.80 / Chapter 5.3.1 --- Principal Coordinates from MDS --- p.81 / Chapter 5.3.2 --- Clustering Scheme --- p.82 / Chapter 5.3.3 --- Layout Scheme --- p.84 / Chapter 5.4 --- Experiments and Results --- p.87 / Chapter 6 --- Conclusions --- p.94 / Bibliography --- p.96
407

Feature based object rendering from sparse views. / CUHK electronic theses & dissertations collection

January 2011 (has links)
The first part of this thesis presents a convenient and flexible calibration method to estimate the relative rotation and translation among multiple cameras. A simple planar pattern is used for accurate calibration and is not required to be simultaneously observed by all cameras. Thus the method is especially suitable for widely spaced camera array. In order to fairly evaluate the calibration results for different camera setups, a novel accuracy metric is introduced based on the deflection angles of projection rays, which is insensitive to a number of setup factors. / The objective of this thesis is to develop a multiview system that can synthesize photorealistic novel views of the scene captured by sparse cameras distributed in a wide area. The system cost is largely reduced due to the small number of required cameras, and the image capture is greatly facilitated because the cameras are allowed to be widely spaced and flexibly placed. The key techniques to achieve this goal are investigated in this thesis. / Cui, Chunhui. / "November 2010." / Adviser: Ngan King Ngi. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 140-155). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
408

Estimation of 3D wireframe face models from movies. / 電影中三維人面模型之估計 / Estimation of 3D wireframe face models from movies. / Dian ying zhong san wei ren mian mo xing zhi gu ji

January 2003 (has links)
Tang Yuk Ming = 電影中三維人面模型之估計 / 鄧育明. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 107-113). / Text in English; abstracts in English and Chinese. / Tang Yuk Ming = Dian ying zhong san wei ren mian mo xing zhi gu ji / Deng Yuming. / Acknowledgement --- p.i / Abstract --- p.ii / Contents --- p.vi / List of Figures --- p.viii / List of Tables --- p.x / List of Abbreviations and Notations --- p.xi / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.2 --- Recent Research Works --- p.2 / Chapter 1.2.1 --- Face modeling from images --- p.2 / Chapter 1.2.2 --- Pose estimation --- p.4 / Chapter 1.3 --- Objectives and Assumptions --- p.7 / Chapter 1.4 --- Our Method --- p.8 / Chapter 1.5 --- Thesis Outline --- p.10 / Chapter 2. --- Basic Theory on 3D Modeling of a Head --- p.11 / Chapter 2.1 --- Introduction --- p.11 / Chapter 2.2 --- Perspective Projection --- p.13 / Chapter 2.3 --- Initialization --- p.17 / Chapter 2.3.1 --- Generic wireframe face model and fiducial points --- p.17 / Chapter 2.3.2 --- Deformations --- p.22 / Chapter 2.3.3 --- Experimental results --- p.35 / Chapter 2.4 --- Summary --- p.39 / Chapter 3. --- Pose Estimation --- p.40 / Chapter 3.1 --- Introduction --- p.40 / Chapter 3.2 --- Problem Description --- p.42 / Chapter 3.3 --- Iterative Least-Square Minimization --- p.45 / Chapter 3.3.1 --- Depth estimation --- p.45 / Chapter 3.3.2 --- Least-square minimization --- p.47 / Chapter 3.3.3 --- Iterative process --- p.52 / Chapter 3.4 --- Experimental Results --- p.54 / Chapter 3.4.1 --- Synthetic data --- p.54 / Chapter 3.4.2 --- Real data --- p.65 / Chapter 3.5 --- Summary --- p.69 / Chapter 4. 
--- 3D Wireframe Model Estimation --- p.70 / Chapter 4.1 --- Introduction --- p.70 / Chapter 4.2 --- 3D Wireframe Model Estimation --- p.72 / Chapter 4.2.1 --- Least-square minimization --- p.73 / Chapter 4.2.2 --- Iterative process --- p.74 / Chapter 4.3 --- 3D Wireframe Model Estimation of the Subsequent Frames --- p.77 / Chapter 4.4 --- Experimental Results --- p.78 / Chapter 4.4.1 --- Synthetic data --- p.78 / Chapter 4.4.2 --- Real data --- p.84 / Chapter 4.5 --- Summary --- p.98 / Chapter 5. --- Contributions and Conclusions --- p.99 / Chapter 5.1 --- Contributions and conclusions --- p.99 / Chapter 5.2 --- Future Developments --- p.102 / Appendix A Triangles and vertices on the IST model --- p.104 / Bibliography --- p.107
409

3D object retrieval and recognition. / Three-dimensional object retrieval and recognition

January 2010 (has links)
Gong, Boqing. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (p. 53-59). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- 3D Object Representation --- p.1 / Chapter 1.1.1 --- Polygon Mesh --- p.2 / Chapter 1.1.2 --- Voxel --- p.2 / Chapter 1.1.3 --- Range Image --- p.3 / Chapter 1.2 --- Content-Based 3D Object Retrieval --- p.3 / Chapter 1.3 --- 3D Facial Expression Recognition --- p.4 / Chapter 1.4 --- Contributions --- p.5 / Chapter 2 --- 3D Object Retrieval --- p.6 / Chapter 2.1 --- A Conceptual Framework for 3D Object Retrieval --- p.6 / Chapter 2.1.1 --- Query Formulation and User Interface --- p.7 / Chapter 2.1.2 --- Canonical Coordinate Normalization --- p.8 / Chapter 2.1.3 --- Representations of 3D Objects --- p.10 / Chapter 2.1.4 --- Performance Evaluation --- p.11 / Chapter 2.2 --- Public Databases --- p.13 / Chapter 2.2.1 --- Databases of Generic 3D Objects --- p.14 / Chapter 2.2.2 --- A Database of Articulated Objects --- p.15 / Chapter 2.2.3 --- Domain-Specific Databases --- p.15 / Chapter 2.2.4 --- Data Sets for the Shrec Contest --- p.16 / Chapter 2.3 --- Experimental Systems --- p.16 / Chapter 2.4 --- Challenges in 3D Object Retrieval --- p.17 / Chapter 3 --- Boosting 3D Object Retrieval by Object Flexibility --- p.19 / Chapter 3.1 --- Related Work --- p.19 / Chapter 3.2 --- Object Flexibility --- p.21 / Chapter 3.2.1 --- Definition --- p.21 / Chapter 3.2.2 --- Computation of the Flexibility --- p.22 / Chapter 3.3 --- A Flexibility Descriptor for 3D Object Retrieval --- p.24 / Chapter 3.4 --- Enhancing Existing Methods --- p.25 / Chapter 3.5 --- Experiments --- p.26 / Chapter 3.5.1 --- Retrieving Articulated Objects --- p.26 / Chapter 3.5.2 --- Retrieving Generic Objects --- p.27 / Chapter 3.5.3 --- Experiments on Larger Databases --- p.28 / Chapter 3.5.4 --- Comparison of Times for Feature Extraction --- p.31 / Chapter 3.6 --- Conclusions & Analysis --- p.31 
/ Chapter 4 --- 3D Object Retrieval with Referent Objects --- p.32 / Chapter 4.1 --- 3D Object Retrieval with Prior --- p.32 / Chapter 4.2 --- 3D Object Retrieval with Referent Objects --- p.34 / Chapter 4.2.1 --- Natural and Man-made 3D Object Classification --- p.35 / Chapter 4.2.2 --- Inferring Priors Using 3D Object Classifier --- p.36 / Chapter 4.2.3 --- Reducing False Positives --- p.37 / Chapter 4.3 --- Conclusions and Future Work --- p.38 / Chapter 5 --- 3D Facial Expression Recognition --- p.39 / Chapter 5.1 --- Introduction --- p.39 / Chapter 5.2 --- Separation of BFSC and ESC --- p.43 / Chapter 5.2.1 --- 3D Face Alignment --- p.43 / Chapter 5.2.2 --- Estimation of BFSC --- p.44 / Chapter 5.3 --- Expressional Regions and an Expression Descriptor --- p.45 / Chapter 5.4 --- Experiments --- p.47 / Chapter 5.4.1 --- Testing the Ratio of Preserved Energy in the BFSC Estimation --- p.47 / Chapter 5.4.2 --- Comparison with Related Work --- p.48 / Chapter 5.5 --- Conclusions --- p.50 / Chapter 6 --- Conclusions --- p.51 / Bibliography --- p.53
410

Comparison of object and pixel-based classifications for land-use and land cover mapping in the mountainous Mokhotlong District of Lesotho using high spatial resolution imagery

Gegana, Mpho January 2016 (has links)
Research Report submitted in partial fulfilment for the degree of Master of Science (Geographical Information Systems and Remote Sensing), School of Geography, Archaeology and Environmental Studies, University of the Witwatersrand, Johannesburg. August 2016. / The thematic classification of land use and land cover (LULC) from remotely sensed imagery is one of the most common research branches of the applied remote sensing sciences. The performances of pixel-based image analysis (PBIA) and object-based image analysis (OBIA) Support Vector Machine (SVM) learning algorithms were compared using WorldView-2 and SPOT-6 multispectral images of the Mokhotlong District in Lesotho, covering an area of approximately 100 km2. For this purpose, four LULC classification models were developed by combining an SVM-based image analysis approach (OBIA and/or PBIA) with a high-resolution image (WorldView-2 and/or SPOT-6), and the results were compared with one another. Of the four LULC models, the OBIA-WorldView-2 model (overall accuracy 93.2%) was found to be the most appropriate and reliable for remote sensing applications in this environment. The OBIA-WorldView-2 LULC model was then subjected to spatial overlay analysis with DEM-derived topographic variables in order to evaluate the relationship between the spatial distribution of LULC types and topography, particularly topographically controlled patterns. It was found that, although there are traces of a relationship between the distribution of LULC types and topography, the relationship was significantly convoluted by both natural and anthropogenic forces, such that the topographically induced patterns of most LULC types had been substantially disrupted. / LG2017
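The report's model comparison rests on standard thematic accuracy assessment, such as the 93.2% overall accuracy quoted above. As a generic sketch of how such figures are derived from an error (confusion) matrix, and not the report's own computation (the matrix below is made up), overall accuracy and Cohen's kappa can be computed as:

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy and Cohen's kappa from an error matrix,
    with rows = reference classes and columns = mapped classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical 3-class error matrix (NOT the report's data).
cm = [[50, 2, 1],
      [3, 40, 2],
      [1, 1, 45]]
oa, kappa = accuracy_metrics(cm)
```

Kappa discounts agreement expected by chance, which is why accuracy assessments in LULC mapping usually report both figures rather than overall accuracy alone.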
