1

Faulkner a pohyb: analýza pohybu v románu Hluk a zuřivost / Motion in Faulkner: An Analysis of Movement in The Sound and the Fury

Hesová, Petra. January 2012.
The Gothic is an extremely viable mode in the history of American literature. As a genre concerned principally with distortions and aberrations, it provides a platform for writers to voice their concerns about periods of transformation and destabilized boundaries. William Faulkner, one of the leading authors of the American South, frequently employs the Gothic mode in his portrayals of the South as a traumatized region trying to cope with the echoes of the Civil War and with the disintegration of old aristocratic values, which manifests itself in the decay of institutions (such as the family) as well as a collapse of individual minds. This emphasis on the human psyche is evident especially in the novel The Sound and the Fury, whose main characters and narrators are representatives of the various extremities of the human psyche (severe mental retardation, suicidal tendencies, schizophrenia and paranoia). Faulkner's use of the Gothic mode is rather unorthodox and innovative, employing inversions and parody which can be appropriately demonstrated by the category of motion and his use of the traditional Gothic devices and character types. The traditional motion patterns - flight and pursuit, quest and purposeless wandering - that are originally connected predominantly with only one Gothic type (the...
2

Deep learning based facial expression recognition and its applications

Jan, Asim. January 2017.
Facial expression recognition (FER) is a research area concerned with classifying human emotions from the expressions on the face. It can be used in applications such as biometric security, intelligent human-computer interaction, robotics, and clinical medicine for autism, depression, pain and mental health problems. This dissertation investigates advanced technologies for facial expression analysis and develops artificial intelligence systems for practical applications. The first part of this work applies geometric and texture domain feature extractors along with various machine learning techniques to improve FER. Advanced 2D and 3D facial processing techniques such as Edge Oriented Histograms (EOH) and Facial Mesh Distances (FMD) are then fused together using a framework designed to investigate their individual and combined domain performances. Following these tests, the face is broken down into facial parts using advanced facial alignment and localisation techniques. Deep learning in the form of Convolutional Neural Networks (CNNs) is also explored for FER. A novel approach is used for the deep network architecture design, learning the facial parts jointly and showing an improvement over using the whole face. Joint Bayesian is also adapted in the form of metric learning to work with deep feature representations of the facial parts, providing a further improvement over using the deep network alone. Dynamic emotion content is explored as a solution that provides richer information than still images. The motion occurring across the content is initially captured using the Motion History Histogram (MHH) descriptor and is critically evaluated. Based on this evaluation, several improvements are proposed through extensions such as the Average Spatial Pooling Multi-scale Motion History Histogram (ASMMHH).
This extension adds two modifications: the first is to view the content in different spatial dimensions through spatial pooling, influenced by the structure of CNNs; the other is to capture motion at different speeds. Combined, they provide better performance than MHH and other popular techniques such as Local Binary Patterns - Three Orthogonal Planes (LBP-TOP). Finally, the dynamic emotion content is observed in the feature space, with sequences of images represented as sequences of extracted features. A novel technique called the Facial Dynamic History Histogram (FDHH) is developed to capture patterns of variation within the sequence of features; an approach not seen before. FDHH is applied in an end-to-end framework for applications in depression analysis and in evaluating the emotions induced by a large set of video clips from various movies. With the combination of deep learning techniques and FDHH, state-of-the-art results are achieved for depression analysis.
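The FDHH idea described above — summarizing how a sequence of feature vectors varies over time into a fixed-length descriptor — can be sketched roughly as follows. This is a toy reconstruction, not the dissertation's implementation: the pattern definition (runs of consecutive component-wise changes), the threshold value, and the number of patterns are all illustrative assumptions.

```python
import numpy as np

def fdhh(features, m_patterns=5, threshold=0.05):
    """Toy FDHH-style descriptor: for each feature component, threshold
    frame-to-frame changes into a binary sequence, then histogram runs of
    m consecutive changes for m = 1..m_patterns. Parameter names and
    default values are illustrative assumptions, not the thesis's."""
    diffs = np.abs(np.diff(features, axis=0))   # (T-1, D) frame-to-frame change
    binary = (diffs >= threshold).astype(int)   # 1 where the component moved
    T1, D = binary.shape
    hist = np.zeros((m_patterns, D), dtype=int)
    for d in range(D):
        run = 0
        for t in range(T1):
            if binary[t, d]:
                run += 1
            else:
                if 1 <= run <= m_patterns:
                    hist[run - 1, d] += 1
                run = 0
        if 1 <= run <= m_patterns:              # close a run ending at the last frame
            hist[run - 1, d] += 1
    return hist.ravel()  # fixed length regardless of sequence length

# usage: a sequence of 20 frames, each a 4-dim feature vector
seq = np.random.default_rng(0).normal(size=(20, 4))
desc = fdhh(seq)
print(desc.shape)  # (20,) = m_patterns * D
```

Because the descriptor length depends only on `m_patterns` and the feature dimensionality, sequences of different durations become directly comparable vectors, which is what makes the end-to-end pipeline possible.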
3

Trajectory-based Arrival Time Prediction using Gaussian Processes : A motion pattern modeling approach

Callh, Sebastian. January 2019.
As cities grow, efficient public transport systems are becoming increasingly important. To offer a more efficient service, public transport providers use systems that predict arrival times of buses, trains and similar vehicles, and present this information to the general public. The accuracy and reliability of these predictions are paramount, since many people depend on them, and erroneous predictions reflect badly on the public transport provider. As public transport vehicles move through a city, they create motion patterns that describe how their positions change over time. This thesis proposes a way of modeling these motion patterns using Gaussian processes, and investigates whether the arrival times of public transport buses in Linköping can be predicted from their motion patterns. The model is evaluated by comparing its accuracy with that of a simple baseline model and a recurrent neural network (RNN). The results show that the proposed model achieves superior performance to an RNN trained on the same amount of data, with excellent explainability and quantifiable uncertainty. However, an RNN can train on much more data than the proposed model in the same amount of time, so in a scenario with large amounts of data the RNN outperforms the proposed model.
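The machinery behind this kind of model can be illustrated with a minimal Gaussian process regression sketch in plain NumPy: given observed progress along a route and elapsed time, the GP posterior yields both a predicted arrival time and its uncertainty. This is not the thesis's actual model; the kernel choice, length scale, noise level, and the toy route data are all assumed values for illustration.

```python
import numpy as np

def rbf(a, b, length=0.5, var=1.0):
    # Squared-exponential kernel on 1-D inputs (length scale is an assumption)
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Standard GP regression posterior mean and variance (zero prior mean)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v * v, axis=0)  # predictive uncertainty
    return mean, var

# observed progress along the route (0 = start, 1 = final stop) vs elapsed minutes
progress = np.array([0.0, 0.2, 0.4, 0.6])
elapsed = np.array([0.0, 3.1, 6.0, 9.2])
mean, var = gp_predict(progress, elapsed, np.array([1.0]))
print(f"predicted arrival: {mean[0]:.1f} min, std {np.sqrt(var[0]):.1f}")
```

The quantified uncertainty (`var`) is exactly the property the thesis highlights as an advantage over an RNN, which produces a point estimate without a principled error bar.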
4

Representation and Learning for Sign Language Recognition

Nayak, Sunita. 17 January 2008.
While recognizing some kinds of human motion patterns requires detailed feature representation and tracking, many can be recognized using global features. The global configuration or structure of an object in a frame can be expressed as a probability density function constructed from relational attributes between low-level features, e.g. edge pixels extracted from the regions of interest. The probability density changes with motion, tracing a trajectory in the latent space of distributions, which we call the configuration space. These trajectories can then be used for recognition with standard techniques such as dynamic time warping. Can these frame-wise probability functions, which usually have high dimensionality, be embedded into a low-dimensional space that still allows meaningful probabilistic distances to be estimated? Given these trajectory-based representations, can one learn models of signs in an unsupervised manner? We address these two fundamental questions in this dissertation. Existing embedding approaches do not extend easily to preserve meaningful probabilistic distances between samples. We present an embedding framework that preserves probabilistic distances such as Chernoff, Bhattacharyya, Matusita, KL, and symmetric KL based on dot products between points in this space, which results in computational savings. We experiment with these five probabilistic distance measures and show the usefulness of the representation in three different contexts: recognition of 147 different signs (a large number of possible classes), recognition of 7 gestures performed by 7 different persons (person variations), and classification of 8 kinds of human-human interaction sequences (segmentation problems).
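The trajectory-matching step can be illustrated with a minimal sketch: a probabilistic distance (here Bhattacharyya) between frame-wise distributions serves as the local cost inside a plain dynamic time warping recursion. The histogram dimensionality and trajectory lengths below are arbitrary illustrative choices, not values from the dissertation.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya distance between two discrete distributions."""
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient, in (0, 1]
    return -np.log(bc)

def dtw(seq_a, seq_b, dist):
    """Plain O(n*m) dynamic time warping over two trajectories,
    using any frame-wise distance as the local cost."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# two short "trajectories in the configuration space": sequences of histograms
rng = np.random.default_rng(1)
traj_a = [h / h.sum() for h in rng.random((5, 8))]
traj_b = [h / h.sum() for h in rng.random((7, 8))]
print(dtw(traj_a, traj_b, bhattacharyya))
```

The embedding framework's contribution is that after projection to a low-dimensional space, a distance of this kind can be approximated via cheap dot products instead of being recomputed over full high-dimensional densities at every DTW cell.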
Currently, researchers in continuous sign language recognition assume that the training signs are already available, often manually selected from continuous sentences, which is tedious and consumes a great deal of human time. We present an approach for automatically learning signs from multiple sentences, using a probabilistic framework to extract the parts of signs that are present in most of their occurrences and are robust to variations produced by adjacent signs. We show results by learning 10 signs and 10 spoken words from 136 sign language sentences and 136 spoken sequences respectively.
