91

Changes in the Neural Bases of Emotion Regulation Associated with Clinical Improvement in Children with Anxiety Disorders

Hum, Kathryn 13 December 2012 (has links)
Background: The present study was designed to examine prefrontal cortical processes in anxious children that mediate cognitive regulation in response to emotion-eliciting stimuli, and the changes that occur after anxious children participate in a cognitive behavioral therapy treatment program. Methods: Electroencephalographic activity was recorded from clinically anxious children and typically developing children at pre- and post-treatment sessions. Event-related potential components were recorded while children performed a go/no-go task using facial stimuli depicting angry, calm, and happy expressions. Results: At pre-treatment, anxious children had significantly greater posterior P1 and frontal N2 amplitudes than typically developing children, components associated with attention/arousal and cognitive control, respectively. For the anxious group only, there were no differences in neural activation between face (emotion) types or trial (Go vs. No-go) types. Anxious children who did not improve with treatment showed increased cortical activation within the time window of the P1 at pre-treatment relative to comparison and improver children. From pre- to post-treatment, only anxious children who improved with treatment showed increased cortical activation within the time window of the N2. Conclusions: At pre-treatment, anxious children appeared to show increased cortical activation regardless of the emotional content of the stimuli. Anxious children also showed greater medial-frontal activity regardless of task demands and response accuracy. These findings suggest indiscriminate cortical processes that may underlie the hypervigilant regulatory style seen in clinically anxious individuals. Neural activation patterns following treatment suggest that heightened perceptual vigilance, as represented by increased P1 amplitudes for non-improvers, may have prevented these anxious children from learning the treatment strategies, leading to poorer outcomes. Increased cognitive control, as represented by increased N2 amplitudes for improvers, may have enabled these anxious children to implement treatment strategies more effectively, leading to improved treatment outcomes. Hence, P1 activation may serve as a predictor of treatment outcome, while N2 activation may serve as an indicator of treatment-related outcome. These findings point to the cortical processes that maintain maladaptive functioning versus the cortical processes that underlie successful intervention in clinically anxious children.
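The abstract above describes extracting P1 and N2 event-related potential components from EEG recorded during the go/no-go task. As an illustration only (not the thesis's actual analysis pipeline), the sketch below shows how a mean ERP amplitude within an assumed component time window can be computed from already-epoched, baseline-corrected data; the window boundaries, trial counts, and random data are all hypothetical.

```python
# A minimal sketch (not the thesis's pipeline) of extracting mean ERP amplitudes
# within assumed P1 and N2 time windows from already-epoched EEG data.
import numpy as np

def mean_amplitude(epochs, times, window):
    """Average amplitude across trials within a time window.

    epochs : ndarray, shape (n_trials, n_samples), baseline-corrected, in microvolts
    times  : ndarray, shape (n_samples,), epoch time axis in seconds
    window : (start, end) in seconds, e.g. an assumed P1 or N2 latency range
    """
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean()          # grand mean over trials and samples

# Hypothetical data: 120 trials, 700 samples spanning -0.2 to 0.5 s
times = np.linspace(-0.2, 0.5, 700)
posterior = np.random.randn(120, 700)      # stand-in for a posterior electrode
frontal = np.random.randn(120, 700)        # stand-in for a frontal electrode

p1 = mean_amplitude(posterior, times, (0.08, 0.13))   # assumed P1 window
n2 = mean_amplitude(frontal, times, (0.20, 0.35))     # assumed N2 window
print(f"P1 mean amplitude: {p1:.2f} uV, N2 mean amplitude: {n2:.2f} uV")
```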
92

代数初学者の文字式に対する認識 / Beginning algebra learners' understanding of algebraic expressions

清水, 明子, Shimizu, Akiko 25 December 1998 (has links)
This entry uses content digitized by the National Institute of Informatics.
93

Statistical modeling of multiword expressions

Su, Kim Nam January 2008 (has links)
In natural languages, words can occur in single units called simplex words or in a group of simplex words that function as a single unit, called multiword expressions (MWEs). Although MWEs are similar to simplex words in their syntax and semantics, they pose their own sets of challenges (Sag et al. 2002). MWEs are arguably one of the biggest roadblocks in computational linguistics due to the bewildering range of syntactic, semantic, pragmatic and statistical idiomaticity they are associated with, and their high productivity. In addition, the large numbers in which they occur demand specialized handling. Moreover, dealing with MWEs has a broad range of applications, from syntactic disambiguation to semantic analysis in natural language processing (NLP) (Wacholder and Song 2003; Piao et al. 2003; Baldwin et al. 2004; Venkatapathy and Joshi 2006).

Our goals in this research are: to use computational techniques to shed light on the underlying linguistic processes giving rise to MWEs across constructions and languages; to generalize existing techniques by abstracting away from individual MWE types; and finally to exemplify the utility of MWE interpretation within general NLP tasks.

In this thesis, we target English MWEs due to resource availability. In particular, we focus on noun compounds (NCs) and verb-particle constructions (VPCs) due to their high productivity and frequency.

Challenges in processing noun compounds are: (1) interpreting the semantic relation (SR) that represents the underlying connection between the head noun and modifier(s); (2) resolving syntactic ambiguity in NCs comprising three or more terms; and (3) analyzing the impact of word sense on noun compound interpretation. Our basic approach to interpreting NCs relies on the semantic similarity of the NC components using firstly a nearest-neighbor method (Chapter 5), then verb semantics based on the observation that it is often an underlying verb that relates the nouns in NCs (Chapter 6), and finally semantic variation within NC sense collocations, in combination with bootstrapping (Chapter 7).

Challenges in dealing with verb-particle constructions are: (1) identifying VPCs in raw text data (Chapter 8); and (2) modeling the semantic compositionality of VPCs (Chapter 5). We place particular focus on identifying VPCs in context, and measuring the compositionality of unseen VPCs in order to predict their meaning. Our primary approach to the identification task is to adapt localized context information derived from linguistic features of VPCs to distinguish between VPCs and simple verb-PP combinations. To measure the compositionality of VPCs, we use semantic similarity among VPCs by testing the semantic contribution of each component.

Finally, we conclude the thesis with a chapter-by-chapter summary and outline of the findings of our work, suggestions of potential NLP applications, and a presentation of further research directions (Chapter 9).
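The compositionality measurement described above relies on the semantic contribution of each VPC component. The sketch below is a hedged illustration of that general idea, not the thesis's model: it scores a verb-particle construction by the cosine similarity between a toy distributional vector for the whole VPC and the vector of its head verb; all vectors and items are invented.

```python
# A hedged sketch of one common way to score the compositionality of a
# verb-particle construction: cosine similarity between a (hypothetical)
# distributional vector for the whole VPC and the vector of its head verb.
# The vectors below are illustrative, not derived from any real corpus model.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy co-occurrence vectors (in practice these would come from a corpus model)
vectors = {
    "eat":     np.array([0.9, 0.1, 0.2, 0.0]),
    "eat_up":  np.array([0.8, 0.2, 0.3, 0.1]),   # fairly compositional
    "give":    np.array([0.1, 0.9, 0.1, 0.2]),
    "give_up": np.array([0.1, 0.1, 0.2, 0.9]),   # largely non-compositional
}

for vpc, verb in [("eat_up", "eat"), ("give_up", "give")]:
    score = cosine(vectors[vpc], vectors[verb])
    print(f"{vpc}: verb-contribution score = {score:.2f}")
```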
94

Sensitivity to Emotion Specified in Facial Expressions and the Impact of Aging and Alzheimer's Disease

McLellan, Tracey Lee January 2008 (has links)
This thesis describes a program of research that investigated the sensitivity of healthy young adults, healthy older adults and individuals with Alzheimer’s disease (AD) to the happiness, sadness and fear specified in facial expressions. In particular, the research investigated the sensitivity of these individuals to the distinctions between spontaneous expressions of emotional experience (genuine expressions) and deliberate, simulated expressions of emotional experience (posed expressions). The specific focus was to examine whether aging and/or AD affects sensitivity to the target emotions. Emotion-categorization and priming tasks were completed by all participants. The tasks employed an original set of ecologically valid facial displays generated specifically for the present research. The categorization task (Experiments 1a, 2a, 3a, 4a) required participants to judge whether targets were, or were not, showing and feeling each target emotion. The results showed that all three groups identified a genuine expression as both showing and feeling the target emotion, whilst a posed expression was identified more frequently as showing than feeling the emotion. Signal detection analysis demonstrated that all three groups were sensitive to the expression of emotion, reliably differentiating expressions of experienced emotion (genuine expressions) from expressions unrelated to emotional experience (posed and neutral expressions). In addition, both healthy young and older adults could reliably differentiate between posed and genuine expressions of happiness and sadness, whereas individuals with AD could not. Sensitivity to emotion specified in facial expressions was found to be emotion specific and independent of both the level of general cognitive functioning and specific cognitive functions. The priming task (Experiments 1b, 2b, 3b, 4b) employed the facial expressions as primes in a word valence task in order to investigate spontaneous attention to facial expression. Healthy young adults showed an emotion-congruency priming effect only for genuine expressions; healthy older adults and individuals with AD showed no priming effects. Results are discussed in terms of the understanding of the recognition of emotional states in others and the impact of aging and AD on that recognition. Consideration is given to how these findings might influence the care and management of individuals with AD.
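The signal detection analysis mentioned above quantifies how reliably observers separate genuine from posed or neutral expressions. A minimal sketch of the standard sensitivity index d' (z-transformed hit rate minus z-transformed false-alarm rate) follows; the counts and the rate-clamping correction are illustrative assumptions, not the thesis's exact procedure.

```python
# A minimal sketch of the signal-detection sensitivity measure d'.
# Hit/false-alarm counts are invented for illustration; the clamping rule is
# one common convention, not necessarily the one used in the thesis.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Clamp rates away from 0 and 1 so the z-transform stays finite.
    hr = max(min(hits / (hits + misses), 0.99), 0.01)
    far = max(min(false_alarms / (false_alarms + correct_rejections), 0.99), 0.01)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical counts: "genuine" trials treated as signal, "posed" as noise
print(round(d_prime(hits=42, misses=8, false_alarms=15, correct_rejections=35), 2))
```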
95

A Real Time Facial Expression Recognition System Using Deep Learning

Miao, Yu 27 November 2018 (has links)
This thesis presents an image-based real-time facial expression recognition system that is capable of recognizing the basic facial expressions of several subjects simultaneously from a webcam. Our proposed methodology combines a supervised transfer learning strategy and a joint supervision method with a new supervision signal that is crucial for facial tasks. A convolutional neural network (CNN) model, MobileNet, which offers both accuracy and speed, is deployed in both offline and real-time frameworks to enable fast and accurate real-time output. Evaluations of both the offline and real-time experiments are provided. The offline evaluation is carried out by first evaluating two publicly available datasets, JAFFE and CK+, and then presenting the results of a cross-dataset evaluation between them to verify the generalization ability of the proposed method. A comprehensive evaluation configuration for the CK+ dataset is given, providing a baseline for fair comparison. The system reaches an accuracy of 95.24% on the JAFFE dataset and 96.92% on the 6-class CK+ dataset, which contains only the last frames of the image sequences. The average run-time cost for recognition in the real-time implementation is approximately 3.57 ms/frame on an NVIDIA Quadro K4200 GPU. The results demonstrate that our proposed CNN-based framework for facial expression recognition, which does not require a massive preprocessing module, can not only achieve state-of-the-art accuracy on these two datasets but also perform the classification task much faster than a conventional machine learning methodology, owing to the lightweight structure of MobileNet.
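As a rough illustration of the transfer-learning component described above (and only that component; the joint-supervision signal is omitted), the sketch below builds a MobileNet-based classifier with a frozen ImageNet-pretrained backbone and a small softmax head. The class count, input size, and hyperparameters are assumptions for illustration, not the thesis's configuration.

```python
# A hedged sketch of MobileNet-based transfer learning for facial expression
# classification, roughly in the spirit of the described framework. It omits
# the joint-supervision signal; class count, input size, and hyperparameters
# are assumptions for illustration.
import tensorflow as tf

NUM_CLASSES = 7  # assumed basic-expression count; the CK+ experiments use 6 classes

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False                      # freeze the pretrained features first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets not shown here
```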
96

Automatické propojování lexikografických zdrojů a korpusových dat. / Automatic linking of lexicographic sources and corpus data

Bejček, Eduard January 2015 (has links)
Along with the increasing development of language resources (new lexicons, lexical databases, corpora, treebanks), the need for their efficient interlinking is growing. With such links in place, one can easily draw on the properties and information of all of them. Given this convergence of resources, universal lexicographic formats are frequently discussed. In the present thesis, we investigate and analyse methods for interlinking language resources automatically. We introduce a system for interlinking lexicons (such as VALLEX, PDT-Vallex, FrameNet or SemLex) that offer information on the syntactic properties of their entries. The system is automated and can be used repeatedly with newer versions of lexicons under development. We also design a method for the identification of multiword expressions in a parsed text based on syntactic information from the SemLex lexicon. Among the outputs demonstrating the feasibility of these methods is the mapping between the VALLEX and PDT-Vallex lexicons, which adds tens of thousands of annotated treebank sentences from the PDT and PCEDT treebanks to VALLEX.
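The sketch below is not the thesis's algorithm; it only illustrates, under simplifying assumptions, what linking two valency lexicons can look like: entries sharing a lemma are paired, and candidate frame pairs are scored by the overlap of their argument labels. The toy lemmas and functor labels are illustrative.

```python
# An illustrative sketch (not the thesis's actual algorithm) of one simple way
# to link entries of two valency lexicons: pair entries sharing a lemma and
# score candidate pairs by the overlap (Jaccard) of their argument-frame labels.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy entries keyed by lemma, each with a list of valency-slot label sets
vallex = {"dát": [{"ACT", "PAT", "ADDR"}], "jít": [{"ACT", "DIR"}]}
pdt_vallex = {"dát": [{"ACT", "PAT", "ADDR"}, {"ACT", "PAT"}], "jít": [{"ACT"}]}

links = []
for lemma, frames in vallex.items():
    for f1 in frames:
        best = max(pdt_vallex.get(lemma, []),
                   key=lambda f2: jaccard(f1, f2), default=None)
        if best is not None:
            links.append((lemma, sorted(f1), sorted(best), round(jaccard(f1, best), 2)))

for link in links:
    print(link)
```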
97

A comprehensive study of referring expressions in ASL

Czubek, Todd Alan 18 March 2018 (has links)
Substantial research has examined how linguistic structures are realized in the visual/spatial modality. However, we know less about linguistic pragmatics in signed languages, particularly the functioning of referring expressions (REs). Recent research has explored how REs are deployed in signed languages, but much remains to be learned. Study 1 explores the inventory and workings of REs in American Sign Language by seeking to replicate and build upon Frederiksen & Mayberry (2016). Following Ariel, F&M propose an inventory of REs in ASL ranked according to the typical accessibility of the referents each RE type signals. Study 1 reproduced their results using more complex narratives and including a wider range of REs in various syntactic roles. Using Toole’s (1997) accessibility rating protocol, we calculated average accessibility ratings for each RE type, thus making possible statistical analyses that show more precisely which REs differ significantly in average accessibility. Further, several RE types that F&M had collapsed are shown to be distinct. Finally, we find general similarities between allocations of REs in ASL and in spoken English, based on 6 matched narratives produced by native English speakers. Study 2 explores a previously unexamined set of questions about concurrently occurring REs: collections of REs produced simultaneously. It compares isolated REs that occur in a linear fashion, similar to spoken language grammars, with co-occurring REs, signaling multiple referents simultaneously (termed here constellations). This study asks whether REs in constellations have pragmatic properties different from those of isolated/linear REs. Statistical evidence is presented that some categories of REs do differ significantly in the average accessibility values of their referents, when compared across linear versus concurrent configurations. Study 3 examines whether the proportions of various RE categories used by native ASL signers vary according to the recipient’s familiarity with the narrative. Do ASL narratives designed to be maximally explicit because of low recipient familiarity demonstrate distinct RE allocations? In this sample of 34 narratives, there is no statistically significant difference in RE use attributable to recipient familiarity. These findings have important implications for understanding the impact of modality on accessibility, the use of REs in ASL, and visual processing.
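As an illustration of the kind of analysis reported for Study 1 (average accessibility ratings per RE type plus significance tests), the sketch below averages hypothetical Toole-style ratings by RE category and compares two categories with a t-test; the category names, ratings, and choice of test are assumptions, not the study's actual data or statistics.

```python
# A small sketch of averaging an accessibility rating per referring-expression
# (RE) category and testing whether two categories differ. The ratings and
# category names below are invented for illustration.
import pandas as pd
from scipy.stats import ttest_ind

data = pd.DataFrame({
    "re_type": ["null", "null", "pronoun", "pronoun", "noun_phrase", "noun_phrase"],
    "accessibility": [3.8, 3.6, 3.1, 2.9, 1.4, 1.7],   # hypothetical Toole-style ratings
})

print(data.groupby("re_type")["accessibility"].mean())

nulls = data.loc[data.re_type == "null", "accessibility"]
nps = data.loc[data.re_type == "noun_phrase", "accessibility"]
print(ttest_ind(nulls, nps))   # do the two RE categories differ on average?
```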
98

Facial Behavior and Pair Bonds in Hylobatids

Florkiewicz, Brittany Nicole 01 May 2016 (has links)
Among primates, humans have the largest and most complex facial repertoires, followed not by their closest living hominid relatives but by hylobatids. Facial behavior is an important component of primate communication that transfers and modulates intentions and motivations. However, why such great variation in primate facial expressions evolved, and why hylobatid facial repertoires seem more similar to those of humans than to those of other apes, is unclear. The current study compared facial expression repertoires, measures of pair bond strength, and behavioral synchrony across ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center, Santa Clarita, CA, drawing on 206 hours of video and 103 hours of focal animal data. This study explored whether facial repertoire breadth or frequency was linked to social parameters of pair bonds, how facial expressions related to behavioral synchrony, and whether facial feedback (i.e., the transfer of behaviors and intentions by mimicking observed facial expressions) was important between pair-partners. Pair-partners' facial repertoires correlated strongly in composition and rate of use, suggesting that facial feedback was important, while behavioral synchrony showed no correlation with facial behavior. The results of this study suggest that larger facial repertoires contribute to strengthening pair bonds, because richer repertoires provide more opportunities for facial feedback, which creates a better ‘understanding’ between partners through smoother and better coordinated interaction patterns.
99

APOLOGY STRATEGIES: A COMPARISON OF SAUDI ENGLISH LEARNERS AND NATIVE SPEAKERS OF AMERICAN ENGLISH

Binasfour, Hajar Salman 01 May 2014 (has links)
This study compares the apology speech acts of Saudi learners of English with those of native speakers of American English to investigate the intercultural communication competence of second language learners. The investigation is based on 120 apology responses from Saudi learners of English and native speakers of American English, collected through a discourse completion task. Participants from both groups utilized the same five strategies described by Cohen and Olshtain (1981): apology expressions, explanations, promises of forbearance, acknowledgments of responsibility, and offers of repair. Results showed no difference in the types of apology strategies adopted, but the frequency of their use varied, significantly so only for offers of repair and promises of forbearance. The results also indicated that the two most universal strategies were apology expressions and explanations, and that these two were the strategies most often combined. This study supports Taguchi's (2011) statement on the possible effect of learners' English proficiency on their speech act production. Moreover, social power had a noticeable impact on students' production of the five apology strategies: the higher the social power of the offended party, the more apology strategies he or she tended to receive. Results from the current study and studies like it are informative not only to the speech act literature but also to the study of intercultural communication, the globalization of American universities, and the development of Saudi cultural missions.
100

Teaching Idiomatic Expressions to Children with Developmental Delays Using the PEAK Relational Training System

Eberhardt, Brittney Elizabeth 01 December 2016 (has links)
AN ABSTRACT OF THE THESIS OF BRITTNEY E. EBERHARDT, for the Master of Science degree in Behavior Analysis and Therapy, presented in August 2016, at Southern Illinois University Carbondale. TITLE: TEACHING IDIOMATIC EXPRESSIONS TO CHILDREN WITH DEVELOPMENTAL DELAYS USING THE PEAK RELATIONAL TRAINING SYSTEM. MAJOR PROFESSOR: Dr. Mark R. Dixon. Idiomatic expressions are commonly used phrases that require the listener to interpret their meaning figuratively rather than literally. The purpose of this study was to expand the research in the area of stimulus equivalence by determining whether untaught symmetrical and transitive responding in relation to idiomatic expressions would emerge for two participants with developmental delays. The first phase of the study involved directly training participants to respond with the statement (B stimuli; e.g., “Go to bed.”) that corresponded to an intraverbal (A stimuli; e.g., “What do you do at night after you put on your pajamas?”). After participants mastered these relations, they were directly trained to respond with the idiomatic expression (C stimuli; e.g., “Hit the hay”) when the experimenter verbally asked, “What is another way to say [A stimuli]?”. The results indicate that both participants achieved mastery criteria on the A-B relations during the first phase of the study; however, they were unable to demonstrate the derived A-C equivalence relation or the C-B relation. After training on the B-C relation, participants again achieved criteria on the trained relation and demonstrated some of the derived symmetrical relations as well as derived transitive relations. In addition, this study utilized procedures from the PEAK-E relational training system to aid replication in research and clinical practice.
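As a hedged illustration of the equivalence logic probed in the study (not its training procedure), the sketch below lists the symmetric and transitive relations expected to emerge once A-B and B-C relations have been directly trained; the single-letter stimulus labels stand in for the intraverbals, statements, and idioms used in the study.

```python
# A hedged sketch of stimulus-equivalence derivation: given directly trained
# A->B and B->C relations, list the symmetric and transitive relations that
# are expected to emerge without direct training. Labels are stand-ins for
# the study's intraverbals (A), statements (B), and idioms (C).
trained = {("A", "B"), ("B", "C")}          # e.g. question->statement, statement->idiom

symmetric = {(y, x) for (x, y) in trained}                       # B->A, C->B
transitive = {(x, z) for (x, y1) in trained
              for (y2, z) in trained if y1 == y2 and x != z}     # A->C
equivalence = {(y, x) for (x, y) in transitive}                  # C->A

print("Derived without training:", sorted(symmetric | transitive | equivalence))
```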
