1. Automatic Scenario Generation Using Procedural Modeling Techniques. Martin, Glenn Andrew, 01 January 2012
Training typically begins with a pre-existing scenario. The training exercise is performed, and an after-action review is sometimes held. This “training pipeline” is repeated for each scenario used that day. The approach is routine and often effective, yet it has several aspects that can result in poor training. In particular, two undesirable events commonly accompany this process. First, scenarios are re-used over and over, which can reduce their training effectiveness. Second, additional responsibility is placed on the individual training facilitator, who must now track performance improvements between scenarios. Taken together, these can produce a multiplicative degradation in effectiveness.

Within any simulation training exercise, the scenario definition is the starting point. While scenarios are, unfortunately, re-used and over-used, they can in fact be generated from scratch each time. Typically, a scenario includes the entire configuration for the simulators, such as the entities used, time of day, weather effects, entity starting locations and, where applicable, munitions effects. In addition, a background story (exercise briefing) is given to the trainees, and the leader often then develops a mission plan that is shared with the trainee group. Given these issues, scientists began to explore more purposeful, targeted training: rather than ad-hoc creation of a simulation experience, attention shifted to the content of the experience and its effects on training. Previous work in scenario generation, interactive storytelling and computational approaches, while providing a good foundation, falls short of addressing the need for adaptive, automatic scenario generation.

This dissertation addresses that need by building a conceptual model to represent scenarios, mapping that conceptual model to a computational model, and then applying a newer procedural modeling technique, known as Functional L-systems, to create scenarios given a training objective, a desired scenario complexity level, and sets of baseline and vignette scenario facets. A software package, known as PYTHAGORAS, was built and is presented; it incorporates all of these contributions into an actual tool for creating scenarios (both manual and automatic approaches are included). The package was then evaluated by subject matter experts in a scenario-based “Turing Test” of sorts, in which both system-generated and human-generated scenarios were rated by independent reviewers. The results are presented from various angles. Finally, a review of how such a tool can affect the training pipeline is included, along with a number of areas into which scenario generation can be expanded; these focus on additional elements of both the training environment (e.g., buildings, interiors) and the training process (e.g., scenario write-ups).
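As an illustration of the rewriting mechanism named above, here is a minimal parametric L-system sketch in Python that grows a scenario from a baseline axiom under a complexity budget. The symbols, rules, and facet names (BASELINE, VIGNETTE, ENTITY, WEATHER) are invented for the example and are not taken from PYTHAGORAS or the dissertation's Functional L-systems.

```python
import random

# Minimal, illustrative parametric L-system: symbols carry a parameter dict,
# and each production maps a symbol to successor symbols, optionally testing
# a condition on the parameters. All names here are hypothetical.
RULES = {
    # BASELINE(budget) -> one vignette plus a smaller baseline while budget remains
    "BASELINE": lambda p: (
        [("VIGNETTE", {"difficulty": 1}), ("BASELINE", {"budget": p["budget"] - 1})]
        if p["budget"] > 0 else []
    ),
    # VIGNETTE(difficulty) -> concrete scenario facets
    "VIGNETTE": lambda p: [
        ("ENTITY", {"kind": random.choice(["tank", "uav", "infantry"])}),
        ("WEATHER", {"kind": random.choice(["clear", "rain", "fog"])}),
    ],
}

def rewrite(string):
    """Apply one parallel rewriting pass; terminals (no rule) are kept as-is."""
    out = []
    for symbol, params in string:
        rule = RULES.get(symbol)
        out.extend(rule(params) if rule else [(symbol, params)])
    return out

def generate(complexity, passes=5):
    """Grow a scenario from an axiom whose budget encodes the desired complexity."""
    string = [("BASELINE", {"budget": complexity})]
    for _ in range(passes):
        string = rewrite(string)
    return string

if __name__ == "__main__":
    for symbol, params in generate(complexity=3):
        print(symbol, params)
```

Because every pass expands all non-terminals in parallel, the budget parameter bounds how many vignette facets the finished string contains, loosely mirroring how a desired complexity level and baseline/vignette facets might steer generation.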
2. On the Guided Construction of Learned Movements: Training in a User-Adaptive Virtual Environment to Enhance Motor Learning. Mayr, Riley, January 2022
No description available.
3. Improving reading performance in peripheral vision: An adaptive training method. Treleaven, Allison Jean, 14 September 2016
No description available.
4. Tailoring Instruction to the Individual: Investigating the Utility of Trainee Aptitudes for Use in Adaptive Training. Landsberg, Carla, 01 January 2015
Computer-based training has become more prevalent as the military and private enterprises search for more efficient ways to deliver training. However, some methods of computer-based training are no more effective than traditional classroom methods. One technique that may be able to approximate the most effective form of training, one-on-one tutoring, is Adaptive Training (AT). AT techniques tailor instruction to the learner in some way and can adjust training parameters such as difficulty, feedback, pace, and delivery mode. There are many ways to adapt training to the learner; in this study I explored adapting the feedback provided to trainees based on spatial ability, in line with Cognitive Load Theory (CLT). Following the CLT expertise reversal effect literature, I hypothesized that for a spatial task, higher-ability trainees would perform better when given less feedback. Conversely, I hypothesized that lower-ability trainees would perform better during training when given more support via feedback. This study also compared two different adaptation approaches. The first, called the ATI approach, adapts feedback based on a premeasured ability, in this case spatial ability. The second, called the Hybrid approach, adapts initially based on ability but then based on performance later in training. I hypothesized that participants who received Hybrid adaptive training would perform better. The study employed a 2 (spatial ability: high, low) × 2 (feedback: matched, mismatched) × 2 (approach: ATI, Hybrid) between-subjects design in which participants were randomly assigned to one of the eight conditions. Ninety-two participants completed a submarine-based periscope operator task that was visual and spatial in nature. The results did not support the use of CLT-derived adaptation based on spatial ability; contrary to the hypotheses, higher-ability participants who received more feedback performed better than those who received less, and lower-ability participants who received less feedback performed better than those who received more. While not significant, the results suggested there may be some benefit to using the Hybrid approach, but more research is needed to determine its relative effectiveness.
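To make the two adaptation approaches concrete, the following is a minimal sketch of how such feedback policies could be coded. The thresholds, feedback levels, and switch point are assumptions for illustration, not the parameters used in the study.

```python
# Illustrative sketch (not the study's actual software) of the two policies:
# ATI fixes the feedback level from a premeasured aptitude, while Hybrid
# starts from aptitude and then adapts to observed performance. All cutoffs
# below are hypothetical.

def ati_feedback(spatial_ability, high_cutoff=0.5):
    """ATI approach: feedback level fixed by a premeasured aptitude."""
    return "detailed" if spatial_ability < high_cutoff else "minimal"

def hybrid_feedback(spatial_ability, recent_scores, switch_after=3,
                    high_cutoff=0.5, mastery=0.8):
    """Hybrid approach: start from aptitude, then adapt to observed performance."""
    if len(recent_scores) < switch_after:
        return ati_feedback(spatial_ability, high_cutoff)
    avg = sum(recent_scores[-switch_after:]) / switch_after
    return "minimal" if avg >= mastery else "detailed"

# A lower-ability trainee who starts performing well gets scaled-back support
# under the Hybrid policy but keeps detailed feedback under the ATI policy.
scores = [0.7, 0.85, 0.9]
print(ati_feedback(0.3))             # detailed
print(hybrid_feedback(0.3, scores))  # minimal
```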
5. Improving algorithms of gene prediction in prokaryotic genomes, metagenomes, and eukaryotic transcriptomes. Tang, Shiyuyun, 27 May 2016
Next-generation sequencing has generated an enormous amount of DNA and RNA sequence data that potentially carries volumes of genetic information, e.g., protein-coding genes. The thesis is divided into three main parts, describing (i) GeneMarkS-2, (ii) GeneMarkS-T, and (iii) MetaGeneTack.
In prokaryotic genomes, ab initio gene finders can predict genes with high accuracy. However, the error rate is not negligible and is largely species-specific. Most errors in gene prediction are made in genes located in genomic regions with atypical GC composition, e.g., genes in pathogenicity islands. We describe a new algorithm, GeneMarkS-2, that uses local GC-specific heuristic models for scoring individual ORFs in the first step of analysis. Predicted atypical genes are retained and serve as ‘external’ evidence in subsequent runs of self-training. GeneMarkS-2 also controls the quality of the training process by effectively selecting optimal orders of the Markov chain models as well as duration parameters in the hidden semi-Markov model. GeneMarkS-2 shows significantly improved accuracy compared with other state-of-the-art gene prediction tools.
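The sketch below illustrates, in simplified form, the kind of GC-dependent scoring described above: an ORF is scored as a log-odds ratio of a coding codon model, chosen by the ORF's local GC content, against a background nucleotide model. The model format and binning scheme are assumptions for the example; GeneMarkS-2's actual heuristic models, self-training, and hidden semi-Markov machinery are considerably more elaborate.

```python
import math

# Simplified illustration of GC-specific ORF scoring. coding_models maps a GC
# bin (e.g. 30, 40, 50 percent) to a codon -> probability table; background
# maps nucleotides to probabilities. Both are placeholders supplied by the
# caller, not GeneMarkS-2 parameters.

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def score_orf(orf, coding_models, background):
    """Return a log-odds coding score for one ORF under the nearest GC model."""
    gc_bin = round(gc_content(orf) * 100 / 10.0) * 10           # nearest 10% bin
    codon_probs = coding_models[min(coding_models, key=lambda b: abs(b - gc_bin))]
    score = 0.0
    for i in range(0, len(orf) - len(orf) % 3, 3):
        codon = orf[i:i + 3]
        p_coding = codon_probs.get(codon, 1e-6)
        p_background = math.prod(background.get(nt, 0.25) for nt in codon)
        score += math.log(p_coding / p_background)
    return score
```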
Massive parallel sequencing of RNA transcripts by next-generation technology (RNA-Seq) provides large numbers of RNA reads that can be assembled into full transcriptomes. We have developed a new tool, GeneMarkS-T, for ab initio identification of protein-coding regions in RNA transcripts. Unsupervised estimation of the algorithm's parameters makes several steps of conventional gene prediction protocols unnecessary, most importantly the manually curated preparation of training sets. We have demonstrated that GeneMarkS-T self-training is robust to errors in assembled transcripts and that its accuracy in identifying protein-coding regions, and particularly in predicting gene starts, compares favorably with other existing methods.
Frameshift (FS) prediction is important for the analysis and biological interpretation of metagenomic sequences. Reads in metagenomic samples are prone to sequencing errors, and insertion and deletion errors that change the coding frame impair the accurate identification of protein-coding genes. Accurate frameshift prediction requires a sufficient amount of data to estimate the parameters of species-specific statistical models of protein-coding and non-coding regions. However, such data are not available; all we have are metagenomic sequences of unknown origin. The challenge of ab initio FS detection is therefore twofold: (i) to find a way to infer the necessary model parameters and (ii) to identify the positions of frameshifts (if any). We describe a new tool, MetaGeneTack, which uses a heuristic method to estimate the parameters of the sequence models used in the FS detection algorithm. On several test sets, the FS detection performance of MetaGeneTack was comparable to or better than that of the earlier program FragGeneScan.
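As a rough illustration of the frameshift-detection problem, the sketch below scores all three reading frames in a sliding window with a caller-supplied coding-potential function and flags positions where the locally preferred frame switches. This is a simplification for exposition only; it is not MetaGeneTack's algorithm, which estimates heuristic model parameters from the metagenomic sequence itself.

```python
# Toy frameshift scan: an indel that changes the coding frame tends to make a
# different frame score best downstream of the error, so a switch in the
# best-scoring frame is a crude frameshift signal. coding_score is any
# caller-supplied function returning a coding-potential score for a sequence.

def best_frame(window, coding_score):
    """Return the frame (0, 1, 2) whose reading scores highest in this window."""
    return max(range(3), key=lambda f: coding_score(window[f:]))

def candidate_frameshifts(read, coding_score, window=120, step=30):
    """Positions where the locally preferred reading frame switches."""
    shifts, previous = [], None
    for start in range(0, max(1, len(read) - window + 1), step):
        frame = best_frame(read[start:start + window], coding_score)
        if previous is not None and frame != previous:
            shifts.append(start)
        previous = frame
    return shifts
```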
6. Induction and Transferral of Flow in the Game Tetris. O'Neill, Kevin John, 17 December 2020
No description available.
7. Leveraging Help Requests in POMDP Intelligent Tutors. Folsom-Kovarik, Jeremiah, 01 January 2012
Intelligent tutoring systems (ITSs) are computer programs that model individual learners and adapt instruction to help each learner differently. One way ITSs differ from human tutors is that few ITSs give learners a way to ask questions. When learners can ask for help, their questions have the potential to improve learning directly and also to act as a new source of model data that helps the ITS personalize instruction. Inquiry modeling gives ITSs the ability to answer learner questions and refine their learner models with an inexpensive new input channel. In order to support inquiry modeling, an advanced planning formalism is applied to ITS learner modeling. Partially observable Markov decision processes (POMDPs) differ from more widely used ITS architectures in that they can plan complex action sequences under uncertainty using machine learning. Tractability issues have previously precluded POMDP use in ITS models. This dissertation introduces two improvements, priority queues and observation chains, to make POMDPs scale well and encompass the large problem sizes that real-world ITSs must confront. A new ITS was created to support trainees practicing a military task in a virtual environment. The development of the Inquiry Modeling POMDP Adaptive Trainer (IMP) began with multiple formative studies on human and simulated learners that explored inquiry modeling and POMDPs in intelligent tutoring. The studies suggest the new POMDP representations will be effective in ITS domains having certain common characteristics. Finally, a summative study evaluated IMP’s ability to train volunteers in specific practice scenarios. IMP users achieved post-training scores averaging up to 4.5 times higher than users who practiced without support and up to twice as high as trainees who used an ablated version of IMP with no inquiry modeling. IMP’s implementation and evaluation helped explore questions about how inquiry modeling and POMDP ITSs work, while empirically demonstrating their efficacy.
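For readers unfamiliar with POMDPs, the sketch below shows the standard belief update that underlies such a tutor: a learner's help request is treated as just another observation that sharpens the tutor's belief over hidden learner states. The two-state example and its probabilities are invented for illustration; IMP's scaled-up representations (priority queues, observation chains) are not shown.

```python
# Textbook POMDP belief update: b'(s') is proportional to
# O(o | s', a) * sum over s of T(s' | s, a) * b(s).

def belief_update(belief, action, observation, T, O):
    """belief: state -> prob; T[(s, a)]: s' -> prob; O[(s', a)]: obs -> prob."""
    successors = {sp for (s, a) in T if a == action for sp in T[(s, a)]}
    new_belief = {}
    for s_next in successors:
        prior = sum(p * T.get((s, action), {}).get(s_next, 0.0)
                    for s, p in belief.items())
        new_belief[s_next] = O.get((s_next, action), {}).get(observation, 0.0) * prior
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()} if total else new_belief

# Hypothetical two-state example: has the learner mastered the current skill?
T = {("unmastered", "hint"): {"unmastered": 0.7, "mastered": 0.3},
     ("mastered", "hint"): {"mastered": 1.0}}
O = {("unmastered", "hint"): {"asks_for_help": 0.6, "no_question": 0.4},
     ("mastered", "hint"): {"asks_for_help": 0.1, "no_question": 0.9}}
print(belief_update({"unmastered": 0.5, "mastered": 0.5}, "hint", "asks_for_help", T, O))
# A help request shifts belief toward the unmastered state (about 0.76 vs 0.24).
```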
8. Automatic Speech Recognition Model for Swedish using Kaldi. Wang, Yihan, January 2020
With the development of the intelligent era, speech recognition has become a hot topic. Although many automatic speech recognition (ASR) tools have been put on the market, a considerable number of them do not support Swedish because of its relatively small number of speakers. In this project, a Swedish ASR model based on Hidden Markov Models and Gaussian Mixture Models is built using Kaldi, with the aim of helping ICA Banken classify after-sales voice calls. A variety of model configurations have been explored, with different phoneme combination methods and different feature extraction and processing methods. Word Error Rate and Real Time Factor are selected as evaluation criteria to compare the recognition accuracy and speed of the models. As far as large-vocabulary continuous speech recognition is concerned, triphone models perform much better than monophone models. Adding feature transformations further improves both accuracy and speed. The combination of linear discriminant analysis, maximum likelihood linear transform, and speaker adaptive training obtains the best performance in this implementation. Among the feature extraction methods, mel-frequency cepstral coefficients are more conducive to higher accuracy, while perceptual linear prediction tends to improve overall speed.
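For reference, Word Error Rate is the word-level edit distance (substitutions, deletions, insertions) between a reference transcript and a hypothesis, divided by the number of reference words, and Real Time Factor is processing time divided by audio duration. A plain-Python sketch of the WER computation follows; the example sentences are made up, and this is not Kaldi's scoring code.

```python
def word_error_rate(reference, hypothesis):
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical reference/hypothesis pair: one missing word out of five -> WER 0.2
print(word_error_rate("jag vill spärra mitt kort", "jag vill spärra kort"))
```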
9. Speaker adaptation of deep neural network acoustic models using Gaussian mixture model framework in automatic speech recognition systems. Tomashenko, Natalia, 01 December 2017
Differences between training and testing conditions may significantly degrade recognition accuracy in automatic speech recognition (ASR) systems. Adaptation is an efficient way to reduce the mismatch between models and data from a particular speaker or channel. There are two dominant types of acoustic models (AMs) used in ASR: Gaussian mixture models (GMMs) and deep neural networks (DNNs). The GMM hidden Markov model (GMM-HMM) approach has been one of the most common techniques in ASR systems for many decades; speaker adaptation is very effective for these AMs, and various adaptation techniques have been developed for them. DNN-HMM AMs, on the other hand, have recently achieved major advances and outperformed GMM-HMM models on various ASR tasks, but speaker adaptation remains very challenging for them. Many adaptation algorithms that work well for GMM systems cannot be easily applied to DNNs because of the different nature of these models. The main purpose of this thesis is to develop a method for efficiently transferring adaptation algorithms from the GMM framework to DNN models. A novel approach for speaker adaptation of DNN AMs is proposed and investigated, based on using so-called GMM-derived features as input to a DNN. The proposed technique provides a general framework for transferring adaptation algorithms developed for GMMs to DNN adaptation. It is explored for various state-of-the-art ASR systems and is shown to be effective in comparison with other speaker adaptation techniques, as well as complementary to them.
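A minimal sketch of the GMM-derived-features idea follows: each acoustic frame is mapped to the per-component log-likelihoods of a GMM, and those values serve as the DNN's input, so that adaptation methods developed for GMMs can act on the GMM stage first. The diagonal-covariance form, dimensions, and random parameters below are assumptions for illustration and do not reproduce the thesis setup.

```python
import numpy as np

def gmm_log_likelihoods(frames, weights, means, variances):
    """frames: (T, D); weights: (K,); means, variances: (K, D).
    Returns a (T, K) matrix of per-component log-likelihoods, usable as
    GMM-derived input features for a DNN acoustic model."""
    diff = frames[:, None, :] - means[None, :, :]                        # (T, K, D)
    log_gauss = -0.5 * np.sum(diff ** 2 / variances
                              + np.log(2 * np.pi * variances), axis=-1)  # (T, K)
    return np.log(weights)[None, :] + log_gauss

# Hypothetical setup: 100 frames of 13-dimensional features, 8 Gaussians.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 13))
weights = np.full(8, 1 / 8)
means = rng.normal(size=(8, 13))
variances = np.ones((8, 13))
features_for_dnn = gmm_log_likelihoods(frames, weights, means, variances)
print(features_for_dnn.shape)  # (100, 8)
```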