311

Effects of stimulus class on short-term memory workload in complex information display formats

Tan, Kay Chuan 28 July 2008 (has links)
The objective of this research effort was to identify opportunities and demonstrate methods to reduce aircraft crew member cognitive workload (CWL) by reducing short-term memory (STM) demand. Two experiments qualitatively and quantitatively compared memory loading as a function of stimulus class. Experiment 1 employed a dual-task paradigm in which the primary task was compensatory tracking, used to load STM, and the secondary task was item recognition using the Sternberg paradigm. Experiment 2 employed a single-task paradigm using a modified version of the Sternberg task. Digits, letters, colors, words, and geometrical shapes were tested as memory-set (MSET) items in the Sternberg task. Recognition latency and error rate served as objective measures of STM performance, while the Subjective Workload Assessment Technique (SWAT) was employed as a second, subjective measure. Root mean square error was used to gauge tracking performance. Analyses of the experiments' results revealed that recognition latency and SWAT ratings varied statistically as functions of stimulus class, MSET size, and the interaction between stimulus class and MSET size. Error rate was not statistically different across stimulus class or MSET size. Post-hoc analyses found SWAT to be a more sensitive STM measurement instrument than recognition latency or error rate. No statistically significant degree of secondary-task intrusion on the tracking task was found. In addition to the commonly used classes of digits and letters, this research demonstrated that colors, words, and geometrical shapes can also be utilized as MSET items in short-term memory workload investigations. More importantly, this research provided further support for the vital link between STM demand and perceived workload. The main conclusion is that stimulus class optimization can be a feasible method for reducing STM demand. Differences in processing rate among stimulus classes are large enough to impact visual display design. For many context-specific applications, it should be possible to determine the most efficient stimulus class in which to portray the needed information. The findings of this research are especially applicable in situations of elevated STM demand (e.g., aviation systems operations). In general, however, the results provide helpful information for visual display designers. / Ph. D.
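Purely as an illustration of the paradigm described above (not the dissertation's software), the following Python sketch generates Sternberg item-recognition trials for different stimulus classes and memory-set (MSET) sizes. The stimulus pools, set sizes, and target probability are assumed example values.

import random

# Assumed example pools for each stimulus class; the dissertation's actual items differ.
STIMULUS_CLASSES = {
    "digits": list("0123456789"),
    "letters": list("BCDFGHJKLMNPQRSTVWXZ"),
    "colors": ["red", "green", "blue", "yellow", "orange", "purple"],
    "words": ["north", "table", "river", "stone", "light", "cloud"],
    "shapes": ["circle", "square", "triangle", "diamond", "star", "hexagon"],
}

def make_trial(stimulus_class, mset_size, p_target=0.5):
    """Return a memory set, a probe, and whether the probe belongs to the set."""
    pool = STIMULUS_CLASSES[stimulus_class]
    mset = random.sample(pool, mset_size)
    if random.random() < p_target:
        probe, in_set = random.choice(mset), True            # positive probe
    else:
        probe, in_set = random.choice([s for s in pool if s not in mset]), False
    return mset, probe, in_set

# Example: one trial with a 4-item memory set drawn from the colors class.
print(make_trial("colors", 4))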
312

Relation of visuospatial and analytical skills and span of short-term memory to academic achievement in high school geometry

Brown, Martha 05 September 2009 (has links)
The purpose of this research was to investigate hypothesized relations of visuospatial and logical reasoning skills, and span of short-term memory, to achievement in geometry. In addition, major subfactors of visuospatial ability (visualization, speeded rotations, spatial orientation, and disembedding) were assessed to determine which were significant predictors of geometry achievement. Vernon's (1965) model of intelligence and Baddeley's model of working memory provided the theoretical framework for these hypotheses. Subjects (N = 110) were students in seven sophomore-level geometry classes in two schools in southwest Virginia. Cognitive measures of speeded rotations, visualization, spatial orientation, disembedding, Gestalt closure, logical reasoning, and short-term memory span were administered. Two measures of geometry achievement were used: the standardized New York Regents Geometry Exam and z-transformations of the classroom final grade. A model of geometry achievement is proposed, and major predictions of the model were supported. Within this sample, regression analysis showed the measures of visualization, logical reasoning, and short-term memory predicted achievement on the New York Regents Geometry Exam. Separate regression analyses for each gender revealed that visualization predicted geometry achievement for the girls, while logical reasoning and short-term memory span predicted geometry achievement for the boys. Gender differences favoring boys were found on measures of speeded rotations, spatial orientation, and Gestalt closure. Girls had significantly higher scores on the measure of short-term memory span and the classroom measure of geometry achievement. / Master of Science
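As an illustration of the kind of regression analysis described above, the following Python sketch uses synthetic data, assumed variable names, and statsmodels OLS (not the author's actual procedure) to regress a geometry-achievement score on visualization, logical reasoning, and short-term memory span measures.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic stand-ins for the cognitive measures (N = 110, as in the study).
df = pd.DataFrame({
    "visualization": rng.normal(size=110),
    "logical_reasoning": rng.normal(size=110),
    "stm_span": rng.normal(size=110),
})
# Hypothetical achievement score built from the predictors plus noise.
df["geometry_exam"] = (0.4 * df.visualization + 0.3 * df.logical_reasoning
                       + 0.2 * df.stm_span + rng.normal(scale=0.5, size=110))

X = sm.add_constant(df[["visualization", "logical_reasoning", "stm_span"]])
model = sm.OLS(df["geometry_exam"], X).fit()
print(model.summary().tables[1])   # coefficient table: which predictors reach significance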
313

Control of Grid-Connected Converters using Deep Learning

Ghidewon-Abay, Sengal 12 January 2023 (has links)
With the rise of inverter-based resources (IBRs) within the power system, the control of grid-connected converters (GCCs) has become pertinent because they interface IBRs to the grid. The conventional method of control for GCCs such as the voltage-sourced converter (VSC) is a decoupled control loop in the synchronous reference frame. However, this model-based control method is sensitive to parameter changes, causing deterioration in controller performance. Data-driven approaches such as machine learning can be utilized to design controllers that are capable of operating GCCs in various system conditions. This work reviews different machine learning applications in power systems as well as the conventional method of controlling a VSC. It explores a deep learning-based control method for a three-phase grid-connected VSC, specifically utilizing a long short-term memory (LSTM) network for robust control. Simulations of a conventionally controlled VSC are conducted in Simulink to collect data for training the LSTM-based controller. The LSTM model is built and trained using the Keras and TensorFlow libraries in Python and tested in Simulink. The performance of the LSTM-based controller is evaluated under different case studies and compared to the conventional method of control. Simulation results demonstrate the effectiveness of this approach, which outperforms the conventional controller and maintains stability under different system parameter changes. / Master of Science / The desire to minimize the use of fossil fuels and reduce carbon footprints has increased the usage of renewable energy resources, also known as inverter-based resources (IBRs), within the power grid. These resources add a level of complexity to operating the grid because of their fluctuating nature, and they are connected to the power grid through grid-connected converters (GCCs). The control method conventionally used for GCCs is derived by accounting for the system parameters, creating a mathematical model under constant parameters. However, the parameters of the system are susceptible to change under different operating and environmental conditions. This results in poor controller performance under various operating conditions, because the controller cannot adapt to the system. Data-driven approaches such as machine learning are becoming increasingly popular for their ability to capture the dynamics of a system with limited knowledge. Applications of machine learning within power systems include fault diagnosis, energy management, and cyber security. This work explores the use of deep learning techniques as a robust approach to controlling GCCs.
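The following Python sketch illustrates, under stated assumptions, what an LSTM-based controller of this kind might look like in Keras/TensorFlow; it is not the thesis code. The input features (d- and q-axis currents and their references), window length, layer sizes, and output commands are assumptions made for illustration, and random arrays stand in for data logged from a conventionally controlled VSC in Simulink.

import numpy as np
from tensorflow.keras import layers, models

WINDOW = 20      # assumed number of past samples fed to the controller
N_FEATURES = 4   # assumed inputs: i_d, i_q, i_d_ref, i_q_ref
N_OUTPUTS = 2    # assumed outputs: v_d_ref, v_q_ref

def build_lstm_controller():
    model = models.Sequential([
        layers.Input(shape=(WINDOW, N_FEATURES)),
        layers.LSTM(64),                 # captures the temporal dynamics of the GCC
        layers.Dense(32, activation="relu"),
        layers.Dense(N_OUTPUTS),         # continuous control commands
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Random arrays stand in for windows of logged measurements and the
# conventional controller's commands used as training targets.
X = np.random.randn(1000, WINDOW, N_FEATURES).astype("float32")
y = np.random.randn(1000, N_OUTPUTS).astype("float32")

model = build_lstm_controller()
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.predict(X[:1], verbose=0))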
314

Dynamic Load Modeling from PSSE-Simulated Disturbance Data using Machine Learning

Gyawali, Sanij 14 October 2020 (has links)
Load models have evolved from the simple ZIP model to a composite model that incorporates the transient dynamics of motor loads. This research utilizes recent advances in machine learning to build a reliable and accurate composite load model. A composite load model is a combination of a static (ZIP) model paralleled with a dynamic model. The dynamic model, recommended by the Western Electricity Coordinating Council (WECC), is an induction motor representation. In this research, a dual-cage induction motor with 20 parameters pertaining to its dynamic behavior, starting behavior, and per-unit calculations is used as the dynamic model. Machine learning algorithms require a large amount of data. The required PMU field data and the corresponding system models are considered Critical Energy Infrastructure Information (CEII), and access to them is limited. The next best option for obtaining the required amount of data is a simulation environment such as PSSE. The IEEE 118-bus system is used as the test setup in PSSE, and dynamic simulations generate the required data samples. Each sample contains data on bus voltage, bus current, and bus frequency, with the corresponding induction motor parameters as target variables. It was determined that an Artificial Neural Network (ANN) with a multivariate-input, single-parameter-output approach worked best. A Recurrent Neural Network (RNN) was also tested side by side to see whether the additional timestamp information would improve the model's predictions. Moreover, a different definition of the dynamic model, based on a transfer-function load, is also studied. Here, the dynamic model is defined as a mathematical representation of the relation between bus voltage, bus frequency, and the active/reactive power flowing in the bus. With this form of load representation, Long Short-Term Memory (LSTM), a variant of the RNN, performed better than competing algorithms such as Support Vector Regression (SVR). The result of this study is a load model consisting of parameters defining the load at a load bus, whose predictions are compared against the simulated parameters to examine their validity for use in contingency analysis. / Master of Science / Independent System Operators (ISOs) and Distribution System Operators (DSOs) have a responsibility to provide an uninterrupted power supply to consumers. To meet that responsibility while keeping operating costs to a minimum, engineers and planners study the system beforehand and seek the optimum capacity for each of the power system elements, such as generators, transformers, and transmission lines. They then test the overall system using power system models, which are mathematical representations of the real components, to verify the stability and strength of the system. However, the verification is only as good as the system models that are used. As most power system components are controlled by the operators themselves, it is easy to develop models for them from the operators' perspective. The load is the only component controlled by consumers; hence the need for better load models. Several studies have addressed static load modeling, and their performance is on par with real behavior. But dynamic loading, in which load behavior depends on time, is rather difficult to model. Some attempts at dynamic load modeling already exist. Physical component-based and mathematical transfer-function-based dynamic models are widely used for such studies. These load structures are largely accepted as good representations of the system's dynamic behavior.
With a load structure in hand, the next task is estimating its parameters. In this research, we tested new machine learning methods to estimate the parameters accurately. Thousands of simulated data samples were used to train the machine learning models. After training, we validated the models on unseen data. This study finally recommends better methods for load modeling.
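The sketch below illustrates, with assumed shapes and synthetic data, the multivariate-input, single-parameter-output idea described above: an LSTM regressor mapping a simulated disturbance window of bus voltage, current, and frequency to one induction-motor parameter. It is not the thesis code; the window length, layer sizes, and training setup are assumptions.

import numpy as np
from tensorflow.keras import layers, models

T_STEPS = 120   # assumed samples per simulated disturbance window
N_SIGNALS = 3   # bus voltage, bus current, bus frequency

model = models.Sequential([
    layers.Input(shape=(T_STEPS, N_SIGNALS)),
    layers.LSTM(32),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),   # one induction-motor parameter per trained model
])
model.compile(optimizer="adam", loss="mse")

# Random arrays stand in for PSSE-simulated disturbance samples and the
# known parameter values used as regression targets.
X = np.random.rand(500, T_STEPS, N_SIGNALS).astype("float32")
y = np.random.rand(500, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)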
315

Language Development and Verbal Encoding: Implications for Individual Differences in Short-Term Memory in 3-Year-Olds

Cardell, Annie Maria 12 June 2007 (has links)
There is evidence that language ability is related to a number of cognitive processes, including memory. This study used EEG to investigate the extent to which verbal encoding strategies account for individual differences in short-term recognition memory performance in 44 3-year-olds. As hypothesized, children with better language ability (as measured by the PPVT-III) performed better on the memory task. Analyses of EEG power at the hypothesized electrode sites were not significant, but the hypothesis that children who perform better on the recognition memory task will use more verbal encoding strategies than children who perform less well was partially supported by EEG coherence analyses. Children in the high memory group had significantly greater frontal-temporal coherence in the left hemisphere (F7-T3) than the low memory group. However, this was true both at baseline and during encoding, implying that children in the high memory group have greater overall connectivity between these brain areas and that they tend to use more verbal strategies than the low memory group, as they interact with their environments in general, not just during a memory task. / Master of Science
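As a hedged illustration of the frontal-temporal coherence measure referenced above (not the study's analysis pipeline), the following Python sketch computes spectral coherence between synthetic F7 and T3 signals; the sampling rate, signal content, and frequency band are assumed values.

import numpy as np
from scipy.signal import coherence

FS = 256                       # assumed EEG sampling rate in Hz
t = np.arange(0, 30, 1 / FS)   # 30 s of synthetic data
shared = np.sin(2 * np.pi * 8 * t)                  # a shared 8 Hz component
f7 = shared + 0.5 * np.random.randn(t.size)         # stand-in for the F7 channel
t3 = shared + 0.5 * np.random.randn(t.size)         # stand-in for the T3 channel

freqs, coh = coherence(f7, t3, fs=FS, nperseg=512)
band = (freqs >= 6) & (freqs <= 9)                  # assumed band of interest
print(f"mean F7-T3 coherence in 6-9 Hz: {coh[band].mean():.2f}")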
316

Bilingual Cyber-aggression Detection on Social Media using LSTM Autoencoder

Kumari, K., Singh, J.P., Dwivedi, Y.K., Rana, Nripendra P. 05 April 2021 (has links)
Yes / Cyber-aggression is an offensive behaviour attacking people based on race, ethnicity, religion, gender, sexual orientation, and other traits. It has become a major issue plaguing online social media. In this research, we have developed a deep learning-based model to identify different levels of aggression (direct, indirect, and no aggression) in a social media post in a bilingual scenario. The model is an autoencoder built using the LSTM network and trained with non-aggressive comments only. Any aggressive comment (direct or indirect) is regarded as an anomaly by the system and is marked as an overtly (direct) or covertly (indirect) aggressive comment, depending on the reconstruction loss produced by the autoencoder. Validation results on bilingual (English and Hindi) data from two popular social media sites, Facebook and Twitter, outperformed the current state-of-the-art models, with improvements of more than 11% on the test sets of the English dataset and more than 6% on the test sets of the Hindi dataset.
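The following Python sketch illustrates the general approach described above under stated assumptions; it is not the authors' model. Comment length, embedding size, layer sizes, and the anomaly threshold are assumed, and random arrays stand in for embedded non-aggressive comments.

import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, EMB_DIM = 50, 100   # assumed comment length and word-embedding size

def build_autoencoder():
    inputs = layers.Input(shape=(SEQ_LEN, EMB_DIM))
    encoded = layers.LSTM(64)(inputs)                        # compress the comment
    repeated = layers.RepeatVector(SEQ_LEN)(encoded)
    decoded = layers.LSTM(64, return_sequences=True)(repeated)
    outputs = layers.TimeDistributed(layers.Dense(EMB_DIM))(decoded)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

autoencoder = build_autoencoder()
# Train only on (stand-in) non-aggressive comments, as in the approach above.
non_aggressive = np.random.randn(800, SEQ_LEN, EMB_DIM).astype("float32")
autoencoder.fit(non_aggressive, non_aggressive, epochs=2, batch_size=32, verbose=0)

# Score a new comment: a large reconstruction error flags it as aggressive.
comment = np.random.randn(1, SEQ_LEN, EMB_DIM).astype("float32")
error = float(np.mean((autoencoder.predict(comment, verbose=0) - comment) ** 2))
THRESHOLD = 1.0   # assumed; in practice set from validation data
print("aggressive" if error > THRESHOLD else "non-aggressive", error)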
317

The reference frame for encoding and retention of motion depends on stimulus set size

Huynh, D.L., Tripathy, Srimant P., Bedell, H.E., Ogmen, Haluk 01 2017 (has links)
Yes / The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
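As a hedged illustration of the geometric relation underlying such a decomposition (not the authors' actual analysis), the sketch below treats the retinotopic motion of an object during smooth pursuit as its spatiotopic motion minus the eye's pursuit velocity, and compares a hypothetical reported direction against both reference-frame predictions; all numbers are made up.

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

spatiotopic_motion = np.array([2.0, 0.0])   # object velocity on the screen (deg/s)
pursuit_velocity = np.array([0.0, 2.0])     # eye velocity during smooth pursuit (deg/s)
retinotopic_motion = spatiotopic_motion - pursuit_velocity   # motion on the retina

reported_direction = unit(np.array([1.0, -0.2]))   # hypothetical observer report

# Agreement of the report with each reference-frame prediction (cosine similarity).
print("spatiotopic match:", float(reported_direction @ unit(spatiotopic_motion)))
print("retinotopic match:", float(reported_direction @ unit(retinotopic_motion)))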
318

Bottlenecks of motion processing during a visual glance: the leaky flask model

Ögmen, H., Ekiz, O., Huynh, D., Bedell, H.E., Tripathy, Srimant P. 31 December 2013 (has links)
Yes / Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. / Supported by R01 EY018165 and P30 EY007551 from the National Institutes of Health (NIH).
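As an illustration of the decay analysis described above (with made-up numbers, not the study's data), the following Python sketch fits an exponential decay to partial-report performance as a function of cue delay and reads off the time constant used to demarcate sensory memory from VSTM.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, b):
    """Performance = asymptote b plus a component decaying with time constant tau."""
    return a * np.exp(-t / tau) + b

# Hypothetical cue delays (seconds) and proportion-correct values.
delays = np.array([0.0, 0.1, 0.3, 0.5, 1.0, 2.0, 3.0])
perf = np.array([0.90, 0.82, 0.70, 0.62, 0.55, 0.52, 0.51])

(a, tau, b), _ = curve_fit(decay, delays, perf, p0=(0.4, 0.5, 0.5))
print(f"decay time constant tau = {tau:.2f} s (sensory-memory contribution)")
print(f"asymptote b = {b:.2f} (VSTM-supported performance)")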
319

Effects of response delay on performance accuracy in aphasia: influences of stimulus duration, task, and locus of temporal impairment

Sayers, Matthew, 0000-0003-1088-6792 05 1900 (has links)
Including response delays in language tasks yields variable results for people with aphasia. Some improve relative to immediate response conditions, others perform more poorly, and others show no significant change. The effect of response delays on performance accuracy varies within individuals depending on the language task. The mechanism that drives this heterogeneity is unknown. In these studies, we investigated factors contributing to differences in performance accuracy in naming and repetition following a response delay. In Experiment 1, we explored the contribution of stimulus duration to accuracy in delay conditions by manipulating picture exposure time in naming. We hypothesized that shorter picture exposure times would lead to lower accuracy in delayed naming conditions, similar to an established trend in delayed repetition (Sayers et al. 2023c). Shorter picture exposure time was associated with lower accuracy and greater variety of temporal impairments (i.e., improved or poorer performance) than naming with longer exposure times. Shortening picture exposure time may reduce contributions of visual semantics, increasing reliance on the language system. In Experiment 2, we compared the measures of the timeliness of semantic and phonological activation transmission with performance in immediate and delayed naming and repetition. Slower semantic activation transmission was associated with relatively improved performance in delayed naming but poorer performance in delayed repetition. We attribute this to the order of access to linguistic representations. In naming, semantic activation initiates the retrieval process, while in repetition, it supports phonological activation, making it more enduring in the face of decay in delayed response conditions. / Communication Sciences
320

[en] A DEPENDENCY TREE ARC FILTER / [pt] UM FILTRO PARA ARCOS EM ÁRVORES DE DEPENDÊNCIA

RENATO SAYAO CRYSTALLINO DA ROCHA 13 December 2018 (has links)
[pt] Natural Language Processing consists of analyzing natural languages computationally, facilitating the development of programs able to work with spoken or written data. One of the most important tasks in this field is Dependency Parsing. This task consists of analyzing the grammatical structure of sentences in order to extract and learn data about their dependency relations. Within a sentence, these relations form a tree in which all words are interdependent. Because of its use in a wide variety of applications, such as Machine Translation and Semantic Role Labeling, a great deal of research with different approaches is carried out in this area with the aim of improving the accuracy of the predicted trees. One of these approaches treats the problem as a token classification task and divides it into three different classifiers, one for each sub-task, later joining their results incrementally. The sub-tasks consist of classifying, for each pair of words in a head-dependent relation, the part-of-speech class of the head, the relative position between the two, and the relative distance between the words. However, looking at previous research using this approach, we note that the bottleneck lies in the third sub-task, the prediction of the distance between tokens. Recurrent Neural Networks are models that let us work with sequences of vectors, making feasible classification problems in which both the input and the output are sequential, which makes them a natural choice for this problem. This work uses Recurrent Neural Networks, specifically Long Short-Term Memory networks, to carry out the task of predicting the distance between words in a dependency relation as a sequence-to-sequence classification problem. For its empirical evaluation, this work follows the line of previous research and uses the Portuguese corpus made available by the Conference on Computational Natural Language Learning 2006 Shared Task. The resulting model achieves 95.27 percent precision, a result better than that obtained by earlier research on the incremental model. / [en] The Natural Language Processing task consists of analyzing the grammatical structure of a sentence written in natural language, aiming to learn, identify, and extract information related to its dependency structure. This data can be structured like a tree, since every word in a sentence has a head-dependent relation to another word from the same sentence. Since Dependency Parsing is used in many applications like Machine Translation, Semantic Role Labeling, and Part-Of-Speech Tagging, researchers aiming to improve the accuracy of their models are approaching this task in many different ways. One of the approaches consists in looking at this task as a token classification problem, using different classifiers for each sub-task and joining them in an incremental way. These sub-tasks consist in classifying, for each head-dependent pair, the Part-Of-Speech tag of the head, the relative position between the two words, and the distance between them. However, previous research using this approach shows that the bottleneck lies in the distance classifier. Recurrent Neural Networks are a kind of Neural Network that allows us to work with sequences of vectors, enabling classification problems where both the input and the output are sequences, which makes them a natural choice for the problem at hand.
This work studies the use of Recurrent Neural Networks, specifically Long Short-Term Memory networks, for the head-dependent distance classifier sub-task as a sequence-to-sequence classification problem. To evaluate its efficiency, this work follows the line of previous research and makes use of the Portuguese corpus of the Conference on Computational Natural Language Learning 2006 Shared Task. The resulting model attains 95.27 percent precision, which is better than the previous results obtained using incremental models.
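The following Python sketch shows, under stated assumptions, one way the sequence-to-sequence distance classifier described above could be set up with a Keras LSTM; it is not the thesis implementation. The sentence length, vocabulary size, number of distance buckets, and layer sizes are assumptions, and random arrays stand in for the CoNLL 2006 corpus features.

import numpy as np
from tensorflow.keras import layers, models

MAX_LEN = 40         # assumed maximum sentence length (padded)
VOCAB = 5000         # assumed vocabulary size
N_DIST_CLASSES = 21  # assumed distance buckets, e.g. head offsets -10..+10

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB, 64, mask_zero=True),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(N_DIST_CLASSES, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-ins for token-id sequences and per-token distance labels.
X = np.random.randint(1, VOCAB, size=(256, MAX_LEN))
y = np.random.randint(0, N_DIST_CLASSES, size=(256, MAX_LEN))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)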
