1. Video modeling via implicit motion representations

Zheng, Yunfei (2008)
Thesis (Ph.D.)--West Virginia University, 2008. Title from document title page; xii, 106 p. : ill. (some col.). Includes abstract and bibliographical references (p. 100-106).
2. Motion detection and correction in magnetic resonance imaging: a thesis presented for the degree of Doctor of Philosophy, University of Canterbury, Christchurch, New Zealand

Maclaren, Julian R. (2007)
Thesis (Ph.D.)--University of Canterbury, 2007. Typescript (photocopy), "October 2007." Includes bibliographical references (leaves 159-171). Also available via the World Wide Web.
3. Layered deformotion with radiance: a model for appearance, segmentation, registration, and tracking

Jackson, Jeremy D. (2007)
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008. Committee: Anthony Yezzi (chair), Patricio Vela, Allen Tannenbaum, Greg Turk, Aaron Lanterman.
4. Novel frameworks for deformable model and nonrigid motion analysis

Li, Min (2005)
Thesis (Ph.D.)--University of Delaware, 2005. Principal faculty advisor: Chandra Kambhamettu, Dept. of Computer and Information Sciences. Includes bibliographical references.
5. Deep learning based facial expression recognition and its applications

Jan, Asim (2017)
Facial expression recognition (FER) is a research area concerned with classifying human emotions from the expressions on the face. It has applications in biometric security, intelligent human-computer interaction, robotics, and clinical medicine for autism, depression, pain, and mental health problems. This dissertation investigates advanced technologies for facial expression analysis and develops artificial intelligence systems for practical applications.

The first part of this work applies geometric- and texture-domain feature extractors, along with various machine learning techniques, to improve FER. Advanced 2D and 3D facial processing techniques such as Edge Oriented Histograms (EOH) and Facial Mesh Distances (FMD) are then fused using a framework designed to investigate their individual and combined domain performances. Following these tests, the face is broken down into facial parts using advanced facial alignment and localisation techniques.

Deep learning, in the form of Convolutional Neural Networks (CNNs), is also explored for FER. A novel approach to the deep network architecture design learns the facial parts jointly, showing an improvement over using the whole face. Joint Bayesian is also adapted, in the form of metric learning, to work with deep feature representations of the facial parts, providing a further improvement over using the deep network alone.

Dynamic emotion content is explored as a source of richer information than still images. The motion occurring across the content is initially captured using the Motion History Histogram (MHH) descriptor and critically evaluated. Based on this evaluation, several improvements are proposed through extensions such as the Average Spatial Pooling Multi-scale Motion History Histogram (ASMMHH). This extension adds two modifications: viewing the content in different spatial dimensions through spatial pooling, influenced by the structure of CNNs, and capturing motion at different speeds. Combined, they provide better performance than MHH and other popular techniques such as Local Binary Patterns - Three Orthogonal Planes (LBP-TOP).

Finally, the dynamic emotion content is observed in the feature space, with sequences of images represented as sequences of extracted features. A novel technique called Facial Dynamic History Histogram (FDHH) is developed to capture patterns of variation within the sequence of features, an approach not seen before. FDHH is applied in an end-to-end framework for depression analysis and for evaluating the emotions induced by a large set of video clips from various movies. Combining deep learning techniques with FDHH achieves state-of-the-art results for depression analysis.
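The abstract does not spell out how FDHH captures variation patterns, but MHH-style descriptors typically threshold frame-to-frame changes into a binary sequence and histogram the run lengths of consecutive changes. The sketch below illustrates that general idea applied per feature component; the function name, parameters, and thresholding scheme are assumptions for illustration, not the thesis's actual formulation:

```python
import numpy as np

def fdhh_sketch(features, n_patterns=5, threshold=0.05):
    """Illustrative MHH-style dynamic descriptor in feature space.

    features : (T, D) array, one D-dimensional feature vector per frame.
    For each of the D components, frame-to-frame changes larger than
    `threshold` form a binary sequence; runs of consecutive changes of
    length 1..n_patterns are counted into a histogram.
    Returns a flattened (D * n_patterns,) fixed-length vector.
    """
    features = np.asarray(features, dtype=float)
    # Binary change map: True where a component moved between frames.
    diffs = np.abs(np.diff(features, axis=0)) > threshold  # (T-1, D)
    hist = np.zeros((features.shape[1], n_patterns), dtype=int)
    for d in range(features.shape[1]):
        run = 0
        for active in diffs[:, d]:
            if active:
                run += 1
            elif run:
                hist[d, min(run, n_patterns) - 1] += 1
                run = 0
        if run:  # close a run that reaches the end of the sequence
            hist[d, min(run, n_patterns) - 1] += 1
    return hist.ravel()
```

Because the output length depends only on D and n_patterns, video sequences of any duration map to a fixed-length vector suitable for a standard regressor or classifier, which is consistent with the end-to-end use described in the abstract.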
