1

Five Whys Root Cause System Effectiveness: A Two Factor Quantitative Review

Key, Barbara A. 01 April 2019
Several tools exist for root cause analysis (RCA); despite this, many practitioners are not obtaining the quality improvement they desire. Those turning to the literature for guidance find that most of the information resides in case studies with anecdotal outcomes. Since 5 Whys analysis is one of the more pervasive tools in use, this study sought to add to the RCA body of knowledge by investigating tool support factors. While studied in conjunction with 5 Whys, the support variables lend themselves to other RCA tools as well. The purpose of the study was to use a 2 x 2 factorial design to determine the significance and effect on RCA effectiveness of using a 5 Whys trained facilitator and of action level classification. During the study, problem solving teams at service centers of a North American electric repair company conducted analyses with or without a trained facilitator. Additionally, corrective actions were or were not categorized by defined levels of ability to impact defect prevention. The dependent variable, effectiveness, was scored against a weighted list of best practices for problem solving analysis. Analysis showed that trained facilitators had a significant effect on problem solving solutions, while classification had minimal effect.
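As a sketch of the kind of two-factor analysis the abstract describes, the snippet below runs a 2 x 2 ANOVA (facilitator training x action classification) on hypothetical effectiveness scores; the column names, score values, and replication count are illustrative assumptions, not the study's data.

```python
# Hedged sketch of a 2 x 2 factorial analysis; all data below are invented.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical effectiveness scores from a weighted best-practice checklist,
# three replicate teams per cell of the 2 x 2 design.
data = pd.DataFrame({
    "facilitator":   ["trained", "trained", "untrained", "untrained"] * 3,
    "classified":    ["yes", "no", "yes", "no"] * 3,
    "effectiveness": [78, 70, 62, 60, 81, 68, 59, 63, 75, 72, 61, 58],
})

# Two-way ANOVA with interaction: facilitator, classification, and their product.
model = ols("effectiveness ~ C(facilitator) * C(classified)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```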
2

Improved Multi-resolution Analysis of the Motion Patterns in Video for Human Action Classification

Shabani, Hossein January 2011
The automatic recognition of human actions in video is of great interest in many applications such as automated surveillance, content-based video summarization, video search, and indexing. The problem is challenging due to the wide range of variation within a given action, such as walking performed by different subjects, and the low variation between similar motions, such as running and jogging. This thesis makes three contributions within a discriminative bottom-up framework to improve the multi-resolution analysis of motion patterns in video for better recognition of human actions. The first contribution is a novel approach for robust local motion feature detection in video. To this end, four multi-resolution, temporally causal, and asymmetric filters are introduced: log Gaussian, scale-derivative Gaussian, Poisson, and asymmetric sinc. Their performance is compared with the widely used multi-resolution Gabor filter in a common framework for detecting local salient motions. The features obtained from the asymmetric filtering are more precise and more robust under geometric deformations such as view changes or affine transformations. Moreover, they provide higher classification accuracy when used with a standard bag-of-words representation of actions and a single discriminative classifier. The experimental results show that the asymmetric sinc performs best; the Poisson and scale-derivative Gaussian filters perform better than the log Gaussian, which in turn performs better than the symmetric temporal Gabor filter. The second contribution is an efficient action representation. The observation is that salient features at different spatial and temporal scales characterize different motion information, so a multi-resolution analysis of these motion characteristics should be representative of different actions, and the resulting multi-resolution action signature provides a more discriminative video representation. The third contribution concerns the classification of different human actions. To this end, an ensemble of classifiers in a multiple classifier system (MCS) framework with a parallel topology is utilized, which can fully benefit from the multi-resolution characteristics of the motion patterns in human actions. The classification combination concept of the MCS is then extended to address two problems in the configuration of a recognition framework: the choice of distance metric for comparing action representations, and the size of the codebook by which an action is represented. Applying MCS at multiple stages of the recognition pipeline yields a multi-stage MCS framework that outperforms existing methods based on a single classifier. Based on the experimental results of the local feature detection and action classification, the multi-stage MCS framework, using the multi-scale features obtained from temporal asymmetric sinc filtering, is recommended for the task of human action recognition in video.
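As a rough illustration of the temporally causal, asymmetric filtering described above (not the thesis code), the sketch below builds Poisson temporal kernels at a few scales and filters a video volume along its time axis; the scale values, kernel length, and the omission of the spatial filtering stage are assumptions.

```python
import numpy as np
from scipy.stats import poisson
from scipy.ndimage import convolve1d

def poisson_kernel(lam, length=16):
    """Causal, asymmetric temporal weights P(t) = exp(-lam) * lam**t / t!."""
    w = poisson.pmf(np.arange(length), mu=lam)
    return w / w.sum()

def temporal_responses(video, scales=(1.0, 2.0, 4.0)):
    """video: (T, H, W) grayscale volume; one filter response per temporal scale."""
    out = []
    for lam in scales:
        k = poisson_kernel(lam)
        # Filter along the time axis only; causal alignment details are omitted here.
        out.append(convolve1d(video.astype(np.float32), k, axis=0, mode="nearest"))
    return out

video = np.random.rand(32, 64, 64)    # stand-in clip
responses = temporal_responses(video)
# Salient motion points would then be local maxima of these multi-scale responses,
# quantized into a bag-of-words histogram for classification.
```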
3

Quoric manifolds

Hopkinson, Jeremy Franklin Lawrence January 2012
Davis and Januszkiewicz introduced in 1991 a family of compact real manifolds, the Quasi-Toric Manifolds, equipped with an action by a torus, a direct product of circle (T) groups. Their manifolds have an orbit space which is a simple polytope, with a distinct isotropy subgroup associated to each face of the polytope, subject to some consistency conditions. They defined a characteristic function which captures the properties of the isotropy subgroups, and showed that their manifolds can be classified by the polytope and characteristic function. They further showed that the cohomology ring of the manifold can be written down directly from properties derived from the polytope and the characteristic function. This work considers how far the circle group T can be replaced by the group of unit quaternions Q in the construction and description of quasi-toric manifolds. Unlike T, the group Q is not commutative, so the actions of Q^n on the product H^n of the set of quaternions, using quaternionic multiplication, are studied in detail. Then, in direct analogy to the quasi-toric manifolds, a family of compact real manifolds, the Quoric Manifolds, is introduced; these have an action by Q^n and an orbit space which is a polytope. A characteristic functor is defined on the faces of the polytope which captures the properties of the isotropy classes of the orbits of the action. It is shown that quoric manifolds can be classified, in a manner similar to the quasi-toric manifolds, by the polytope and characteristic functor. A restricted family, the global quoric manifolds, satisfying an additional condition, is defined. It is shown that in any dimension there are infinitely many polytopes over which a global quoric manifold can be defined, that any global quoric manifold can be described as a quotient space of a moment angle complex over the polytope, and that its integral cohomology ring can be calculated, taking a form analogous to that in the quasi-toric case.
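A small numerical aside (not from the thesis) makes the non-commutativity concrete: under the Hamilton product, ij = k while ji = -k, which is the basic obstacle to carrying the toric arguments over to Q^n verbatim.

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))   # (0, 0, 0, 1)  ->  k
print(qmul(j, i))   # (0, 0, 0, -1) -> -k, so ij != ji
```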
4

Video Action Understanding: Action Classification, Temporal Localization, And Detection

Tirupattur, Praveen 01 January 2024
Video action understanding involves comprehending actions performed by humans in videos. Central to this task are four fundamental questions: What, When, Where, and Who. These questions encapsulate the essence of action classification, temporal action localization, action detection, and actor recognition. Despite notable progress on these tasks, many challenges persist, and in this dissertation we propose innovative solutions to tackle them head-on. First, we address challenges in action classification ("What?"), specifically multi-view action recognition. We propose a novel transformer decoder-based model, with learnable view and action queries, to enforce the learning of action features robust to shifts in viewpoint. Next, we focus on temporal action localization ("What?" and "When?") and address challenges introduced in the multi-label setting. Our solution leverages the inherent relationships between complex actions in real-world videos: we introduce an attention-based architecture that models these relationships for temporal action localization. Next, we propose Gabriella, a real-time online system for activity detection ("What?", "When?", and "Where?") in security videos. The system has three stages: tubelet extraction, activity classification, and online tubelet merging. For tubelet extraction, we propose a localization network that detects potential foreground regions to generate action tubelets. The detected tubelets are assigned activity class scores by the classification network and merged using our proposed Tubelet-Merge Action-Split (TMAS) algorithm to form the final action detections. Finally, we introduce an approach to the novel task of joint action and actor recognition ("What?" and "Who?") and solve it using disentangled representation learning, simultaneously identifying both the subjects (actors) and their actions. Our transformer-based model learns to separate actor and action features effectively by employing supervised contrastive losses alongside a standard cross-entropy loss to ensure proper feature separation.
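As a loose sketch of the loss combination mentioned for the joint actor/action task, the snippet below pairs a supervised contrastive term with standard cross-entropy on a hypothetical batch; the embedding shapes, temperature, and 0.5 weighting are assumptions, not the dissertation's settings.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized embeddings z of shape (N, D)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    self_mask = np.eye(len(labels), dtype=bool)
    sim = sim - sim.max(axis=1, keepdims=True)                # numerical stability
    exp_sim = np.where(self_mask, 0.0, np.exp(sim))
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask   # same-class pairs
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor[pos.sum(axis=1) > 0].mean()

def cross_entropy(logits, labels):
    """Standard softmax cross-entropy over integer class labels."""
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))          # hypothetical action embeddings
logits = rng.normal(size=(8, 5))      # hypothetical action logits (5 classes)
y = rng.integers(0, 5, size=8)        # hypothetical action labels
total = cross_entropy(logits, y) + 0.5 * supcon_loss(z, y)   # assumed weighting
```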
5

Character strengths and virtues of young internationally adopted Chinese children: A longitudinal study from preschool to school age

Loker, Troy 01 June 2009
Shifting from traditional deficit-based psychological research, the current study aimed to broaden the understanding of post-adoption development through a strength-based approach and to further explore the recently developed Values in Action (VIA) Classification of Character Strengths among a particularly resilient population of young children: internationally adopted Chinese children. Archival longitudinal data from parents' descriptions of their adopted Chinese children's positive characteristics were analyzed at two time points two years apart. Data on 179 children aged 4 to 5 years (M = 59.67 months, SD = 6.60 months) at Time 1, from 172 families, were analyzed with content analysis coding procedures. Overall, the profile of character strengths among young Chinese adoptees was very comparable to that of a general sample of young children assessed in a previous study: both samples had 11 of the 24 character strengths from the VIA Classification represented among 10% or more of the children, while the remaining character strengths were rarely represented. The five most prevalent character strengths for Chinese adoptees were Love, Kindness, Humor, Zest, and Social Intelligence. The biggest difference between the adopted Chinese children in this study and non-adopted children was that Zest and Social Intelligence were represented at much higher rates. Prevalence rates showed no significant changes over time, with one exception among the character strengths (Love decreased from Time 1 to Time 2) and one among the more broadly categorized virtues (Courage increased from Time 1 to Time 2). The two most prevalent virtues, Humanity and Courage, were associated with lower levels of externalizing and internalizing problems, respectively, which may point to positive traits particularly related to this population's marked resilience. Results provide a broader understanding of post-adoption development and offer the first longitudinal data on character strengths among young children.
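One way (not necessarily the study's procedure) to test whether a strength's prevalence changed between the two time points is an exact McNemar test on the paired codings; the sketch below uses invented 0/1 indicators in place of the archival coding data.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
# Hypothetical 0/1 indicators of whether a given strength (say, Love) was coded
# for each of the 179 children at Time 1 and Time 2; the real coding data are
# not reproduced here.
t1 = rng.integers(0, 2, size=179)
t2 = rng.integers(0, 2, size=179)

prev_t1, prev_t2 = t1.mean(), t2.mean()      # prevalence at each time point
table = np.array([[np.sum((t1 == 1) & (t2 == 1)), np.sum((t1 == 1) & (t2 == 0))],
                  [np.sum((t1 == 0) & (t2 == 1)), np.sum((t1 == 0) & (t2 == 0))]])
result = mcnemar(table, exact=True)          # paired test of change from T1 to T2
print(prev_t1, prev_t2, result.pvalue)
```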
6

Modeling and recognizing interactions between people, objects and scenes / Modélisation et reconnaissance des actions humaines dans les images

Delaitre, Vincent 07 April 2015
In this thesis, we focus on modeling interactions between people, objects and scenes, and show the benefits of combining these cues to improve both action classification and scene understanding. In the first part, we seek to exploit scene and object context to improve action classification in still images. We explore alternative bag-of-features models and propose a method that takes advantage of scene context. We then propose a new model exploiting object context for action classification, based on pairs of body part and object detectors. We evaluate our methods on our newly collected still-image dataset as well as on three other action classification datasets, and show performance close to the state of the art.
In the second part of this thesis, we address the reverse problem and aim to use the contextual information provided by people to help object localization and scene understanding. We collect a new dataset of time-lapse videos involving people interacting with indoor scenes. We develop an approach that describes image regions by the distribution of human poses co-located with them, and we use this pose-based representation to improve object localization. We further demonstrate that people cues can improve several steps of existing pipelines for indoor scene understanding. Finally, we extend the annotation of our time-lapse dataset to 3D and show how to infer object labels for the occupied 3D volumes of a scene. To summarize, the contributions of this thesis are the following: (i) we design action classification models for still images that take advantage of scene and object context, and we gather a new dataset to evaluate their performance; (ii) we develop a new model that improves object localization from observations of people interacting with an indoor scene, and we test it on a new dataset centered on person, object and scene interactions; (iii) we propose the first method to estimate the volumes occupied by different object classes in a room, which allows us to analyze the current 3D scene understanding pipeline and identify its main sources of error.
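For readers unfamiliar with the bag-of-features baseline the first part builds on, the sketch below shows the standard pipeline of quantizing local descriptors against a k-means codebook and representing an image as a normalized visual-word histogram; the descriptor source, codebook size, and downstream classifier are assumptions, not the thesis configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(2000, 128))     # stand-in local descriptors (SIFT-like)
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(descriptors)

def bof_histogram(image_descriptors, codebook):
    """L1-normalized histogram of visual-word assignments for one image."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

img_desc = rng.normal(size=(300, 128))         # descriptors from a single image
h = bof_histogram(img_desc, codebook)          # would be fed to an SVM or similar
```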
