171

Short music biography of Bosnia and Herzegovina (1878–1918): concert recordings (October 24, 2014); Academy of Music in Sarajevo, Sarajevo, 2014; Maja Ačkar Zlatarević, piano; concert organizer and editor Dr. Lana Paćuka [review]

Kokanović Marković, Marijana 08 May 2020
The CD Kratka muzička biografija Bosne i Hercegovine (1878–1918) [A Short Musical Biography of Bosnia and Herzegovina] came about thanks to the research work of the young Bosnian-Herzegovinian musicologist Lana Paćuka, a lecturer at the Academy of Music in Sarajevo.
172

Integrating lecture recording to support flexible learning and responsive pedagogies in a dual mode undergraduate law degree

Prinsloo, Heinrich 14 February 2020
This study investigates the integration of lecture recordings to support flexible learning and responsive pedagogical approaches in an undergraduate LLB degree presented in a dual mode (face-to-face and online) by the University of the Free State's Faculty of Law. In this faculty, the integration of lecture recording is compulsory in all classes; the only choice lecturers have is among three basic software tools. According to the literature, integrating lecture recording can bring about flexibility in student learning, and flexibility can have both positive and negative implications for student learning. This study uses Puentedura's (2006) SAMR (Substitution, Augmentation, Modification and Redefinition) model as a theoretical lens to analyse different levels or types of integration of lecture recording by students and lecturers. The SAMR categories helped the study identify whether Substitution, Augmentation, Modification or Redefinition was present when students and lecturers integrated lecture recording in teaching and learning. The study implements a mixed-methods research approach that included student and lecturer surveys, lecturer interviews, and telephonic interviews and focus group discussions with students. Findings indicate that students' overall experience of lecture recording was that it enhanced their learning and gave them flexibility regarding how, where and when they could learn. Some lecturers claimed that lecture recording enhances their teaching methodology and that it can have an impact on their students' learning. Lecturers agreed that lecture recording can be applied and integrated to transform the way they teach. Lecturers also indicated that lecture recording, in the form of audio recordings of lectures, in some instances caused students to hold lecturers accountable, not always fairly, for their utterances in class. Both staff and students indicated that they had concerns about class attendance when lecture recording was used, regardless of whether lectures were recorded when presented online or face-to-face. The study found that campus-based and online students integrated lecture recordings into their learning experiences in a variety of ways. The majority of campus-based students reported using lecture recordings to augment their learning experiences, especially in relation to how and whether they attended face-to-face lectures. Modification strategies for online students included making use of lecture recordings as a substitute for their presence at face-to-face lectures. Some online students reported that engaging with lecture recordings made them feel part of the course and its community of students. Lecturers' specific approaches to teaching play a considerable role in the way they experience lecture recording and the way they integrate it in their courses. In addition to survey findings, the study also presents lecturer views to illustrate some of these variations and interplays. While some lecturers reported that using lecture recordings has completely transformed the way they teach, others admitted that, if they had a choice, they would not use lecture recordings in their teaching. The study offers a contextual account of lecture recording integration and contributes to global debates around lecture recording.
Student and lecturer experiences with lecture recording, as observed through various SAMR levels of integration, depend on the type of lecture recording tool and software used, on beliefs about the purpose of a lecture regardless of its mode of delivery, and on the reason for recording it in the first place. The study contributes to a local understanding of lecture recording integration and stimulates new dialogue that could guide future integration of lecture recording technologies, locally and internationally.
173

Deep networks for sign language video caption

Zhou, Mingjie 12 August 2020
In the hearing-loss community, sign language is the primary tool for communication, yet a communication gap remains between people with hearing loss and people with normal hearing. Sign language is different from spoken language: it has its own vocabulary and grammar. Recent works concentrate on sign language video captioning, which consists of sign language recognition and sign language translation. Continuous sign language recognition, which can bridge the communication gap, is a challenging task because of the weakly supervised ordered annotations, where no frame-level label is provided. To overcome this problem, connectionist temporal classification (CTC) is the most widely used method. However, CTC learning can perform poorly if the extracted features are not good. For better feature extraction, this thesis presents novel self-attention-based fully-inception (SAFI) networks for vision-based end-to-end continuous sign language recognition. Because the lengths of sign words differ from one another, we introduce the fully-inception network with different receptive fields to extract dynamic clip-level features. To further boost performance, the fully-inception network with an auxiliary classifier is trained with an aggregation cross entropy (ACE) loss. The encoder of the self-attention network is then used as a global sequential feature extractor to model the clip-level features with CTC. The proposed model is optimized by jointly training with ACE on clip-level feature learning and CTC on global sequential feature learning in an end-to-end fashion. The best method among the baselines achieves 35.6% WER on the validation set and 34.5% WER on the test set; it employs a better decoding algorithm for generating pseudo-labels to perform EM-like optimization that fine-tunes the CNN module. In contrast, our approach focuses on better feature extraction for end-to-end learning. To alleviate overfitting on the limited dataset, we employ temporal elastic deformation to triple the real-world dataset RWTH-PHOENIX-Weather 2014. Experimental results on this dataset demonstrate the effectiveness of our approach, which achieves 31.7% WER on the validation set and 31.2% WER on the test set. Even though sign language recognition can, to some extent, help bridge the communication gap, its output is still organized in sign language grammar, which differs from spoken language. Unlike sign language recognition, which recognizes sign gestures, sign language translation (SLT) converts sign language to target spoken-language text of the kind hearing people commonly use in their daily lives. To achieve this goal, this thesis provides an effective sign language translation approach that attains state-of-the-art performance on the largest real-life German sign language translation database, RWTH-PHOENIX-Weather 2014T. Moreover, a direct end-to-end sign language translation approach yields promising results (a gain from 9.94 to 13.75 BLEU on the validation set and from 9.58 to 14.07 BLEU on the test set) without intermediate recognition annotations. These comparative and promising experimental results show the feasibility of direct end-to-end SLT.
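As a concrete illustration of the joint objective this abstract describes, the following is a minimal PyTorch sketch of CTC training with a simplified aggregation cross entropy (ACE) term. It is not the thesis code: the vocabulary size, tensor shapes, and the ACE implementation (a plain reading of the aggregation cross entropy idea) are illustrative assumptions.

```python
# Illustrative sketch (not the thesis code): joint CTC + ACE training
# for continuous sequence recognition with PyTorch.
import torch
import torch.nn as nn

T, N, C = 80, 4, 1232            # time steps, batch, vocab size (assumed)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

# Clip-level network outputs after log_softmax: shape (T, N, C).
log_probs = torch.randn(T, N, C).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, 12))                # ordered gloss labels
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

loss_ctc = ctc(log_probs, targets, input_lengths, target_lengths)

def ace_loss(probs, targets, target_lengths):
    """Simplified aggregation cross entropy: match the time-aggregated
    prediction distribution against normalized label counts."""
    T, N, C = probs.shape
    agg = probs.sum(dim=0) / T                        # (N, C) mean prediction
    counts = torch.zeros(N, C)
    for n in range(N):
        for label in targets[n, : target_lengths[n]]:
            counts[n, label] += 1
    counts[:, 0] = T - target_lengths.float()         # blanks take the rest
    counts /= T
    return -(counts * torch.log(agg + 1e-8)).sum(dim=1).mean()

# Joint optimization: ACE on clip-level features, CTC on the sequence.
loss = loss_ctc + ace_loss(log_probs.exp(), targets, target_lengths)
```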
174

Neural Correlates of Directional Hearing following Noise-induced Hearing Loss in the Inferior Colliculus of Dutch-Belted Rabbits

Haragopal, Hariprakash 22 September 2020
No description available.
175

Reward Comparison in the Striatum

Webber, Emily S. 08 August 2013
No description available.
176

Phoneme-based Video Indexing Using Phonetic Disparity Search

Barth, Carlos Leon 01 January 2010
This dissertation presents and evaluates an approach to the video indexing problem by investigating a categorization method that transcribes audio content through Automatic Speech Recognition (ASR) combined with Dynamic Contextualization (DC), Phonetic Disparity Search (PDS) and Metaphone indexation. The suggested approach applies genome pattern matching algorithms with computational summarization to build a database infrastructure that provides an indexed summary of the original audio content. PDS complements the contextual phoneme indexing approach by optimizing topic search performance and accuracy in large video content structures. A prototype was established to translate news broadcast video into text and phonemes automatically by using ASR utterance conversions. Each phonetic utterance extraction was then categorized, converted to Metaphones, and stored in a repository with contextual topical information attached and indexed for posterior search analysis. Following the original design strategy, a custom parallel interface was built to measure the capabilities of dissimilar phonetic queries and provide an interface for result analysis. The postulated solution provides evidence of superior topic matching when compared to traditional word and phoneme search methods. Experimental results demonstrate that PDS can perform 3.7% better than the same phoneme query, while Metaphone search proved to be 154.6% better than the same phoneme search and 68.1% better than the equivalent word search.
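To make the Metaphone indexation idea concrete, here is a small, self-contained sketch of phonetic indexing and lookup over an ASR transcript. It is a stand-in for the dissertation's pipeline, not its code, and it assumes the third-party metaphone package (pip install metaphone) for Double Metaphone codes.

```python
# Illustrative stand-in for Metaphone indexation: an inverted index
# from phonetic codes to transcript timestamps.
# Assumes the third-party `metaphone` package (pip install metaphone).
from collections import defaultdict
from metaphone import doublemetaphone

def build_phonetic_index(transcript):
    """transcript: list of (word, timestamp_in_seconds) pairs from ASR."""
    index = defaultdict(list)
    for word, ts in transcript:
        # doublemetaphone returns (primary, secondary) codes; the
        # secondary code may be an empty string, which filter drops.
        for code in filter(None, doublemetaphone(word)):
            index[code].append((word, ts))
    return index

def phonetic_search(index, query):
    """Match on phonetic code, so similar-sounding ASR misrecognitions
    land in the same bucket as the intended word."""
    hits = []
    for code in filter(None, doublemetaphone(query)):
        hits.extend(index.get(code, []))
    return hits

index = build_phonetic_index([("economy", 12.4), ("senate", 31.0)])
print(phonetic_search(index, "econamy"))   # phonetic match despite the typo
```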
177

High Schoolers' Approaches to Learning Melodies by Ear

Oswald, Peter January 2022
Aural learning, sometimes called "learning by ear," is a fundamental mechanism of music, connected to musical perception, acquisition, and understanding. Researchers have primarily studied aural learning strategies through self-reported data or qualitative observations. Because the interaction between a learner and a recording offers a unique window into self-guided learning approaches and strategies, the aim of this study was to use participants' interactions with recordings as a data source. The purpose of this study was to investigate how high schoolers aurally learn unfamiliar melodies and to identify trends that contribute to efficient learning. Twenty-nine high-school participants in individual sessions learned three different melodies by ear. As participants learned each melody, I used a modified digital playback interface to collect interaction data on three learning constructs from the literature: (a) learning chunk length; (b) learning chunk order; and (c) synchronous versus turn-taking playing. Descriptive results showed that participants preferred to learn melodies in one-, two-, four-, or eight-measure chunks, and that the time they spent learning in either a synchronous or a turn-taking approach had no relationship to their total learning time. A Spearman rank-order correlation revealed a moderate, inverse relationship between average chunk length and total learning time (rho = -.506, p < .001), suggesting that participants who focused on learning larger chunks learned the whole melody faster. An analysis of participants' choice of learning chunk order revealed three general approaches to the task. Participants used a "From the Beginning" approach approximately 14% of the time, characterized by repeatedly starting from the beginning and increasing the length of the learning chunk with each repetition. Participants used a "Half to Whole" approach approximately 29% of the time, characterized by focusing on half of the melody at a time. Finally, participants most frequently used a "Bit by Bit" approach, 57% of the time, characterized by learning short one- to three-measure chunks progressing from the beginning of the melody to the end. Most participants began and ended their learning session by listening to the entire melody. An ANOVA comparing approaches showed that the "Half to Whole" approach was significantly more effective than the "Bit by Bit" approach (F[2,66] = 10.25, p < .001), but showed no differences between the other approaches. Some participants made notable changes in their approach between melodies, showing isolated examples of improvement when they chose longer chunks and switched to a "Half to Whole" approach. The approaches that emerged from this study provide a foundation for future experimental research on the ways students best learn from recordings. / Music Education
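For readers who want to reproduce the flavor of the headline statistic, here is a short, illustrative SciPy snippet computing a Spearman rank-order correlation between average chunk length and total learning time. The data values are synthetic placeholders, not the study's data.

```python
# Synthetic illustration (not the study's data) of the reported
# Spearman rank-order correlation between average learning-chunk
# length and total learning time.
from scipy.stats import spearmanr

avg_chunk_len = [1.0, 1.5, 2.0, 2.0, 4.0, 4.0, 8.0]   # measures per chunk
total_time_s = [620, 540, 500, 510, 400, 380, 300]    # seconds to learn

rho, p = spearmanr(avg_chunk_len, total_time_s)
print(f"rho = {rho:.3f}, p = {p:.4f}")   # negative rho: longer chunks,
                                         # shorter total learning time
```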
178

Inspelningsutrustning som verktyg i övningsrummet [Recording Equipment as a Tool in the Practice Room]

Brynolf, Max January 2023
In this project, my musical self-awareness is examined with the help of various recording devices, by identifying aspects of my playing that weren't noticed until the recording was listened to afterwards. The possibilities available to the modern musician differ significantly from earlier times: it is now easier than ever to make a recording with a mobile device. By recording pieces, listening to them and taking notes, I identified several concrete aspects of my piano playing that weren't noticed while playing through. These were assigned the categories "precision", "balance", "body language", "agogics and interpretation", "phrasing" and "tempo and rhythm". Regarding precision and balance, no major insights were gained beyond the occasionally underestimated significance of certain wrong notes and the importance of evenness of touch. The body language had a general tendency of not being expressive enough, and the rhythmical aspects often concerned instabilities of various kinds, such as uneven rubato or sudden tempo changes. Lastly, the key to improving agogics and interpretation often lay in making musical structures clearer, which gave a significant improvement in the listening experience. By actively working with this method of analysis, my musical self-awareness gradually improved over time. / The recorded (sounding) component of the work is archived.
179

Learning Video Representation from Self-supervision

Chen, Brian January 2023
This thesis investigates the problem of learning video representations for video understanding. Previous works have explored data-driven deep learning approaches, which have been shown to be effective in learning useful video representations. However, obtaining large amounts of labeled data can be costly and time-consuming. To overcome this challenge, we investigate self-supervised approaches for multimodal video data. Video data typically contains multiple modalities, such as visual, audio, transcribed speech, and textual captions, which can serve as pseudo-labels for representation learning without the need for manual labeling. By utilizing these modalities, we can train deep representations over large-scale video data consisting of millions of video clips collected from the internet. We demonstrate the scalability benefits of multimodal self-supervision by achieving new state-of-the-art performance in various domains, including video action recognition, text-to-video retrieval, and text-to-video grounding. We also examine the limitations of these approaches, which often rely on association assumptions about the multiple modalities of data used in self-supervision. For example, the text transcript is often assumed to be about the video content, and two segments of the same video are assumed to share similar semantics. To overcome this problem, we propose new methods for learning video representations with more intelligent sampling strategies that capture samples sharing high-level semantics or consistent concepts. The proposed methods include a clustering component to address false-negative pairs in multimodal paired contrastive learning, a novel sampling strategy for finding visually groundable video-text pairs, an investigation of object tracking supervision for temporal association, and a new multimodal task for demonstrating the effectiveness of the proposed model. We aim to develop more robust and generalizable video representations for real-world applications, such as human-to-robot interaction and event extraction from large-scale news sources.
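As an illustration of the paired contrastive setup this abstract builds on, here is a minimal sketch of the symmetric InfoNCE objective commonly used for video-text pretraining. It is a generic example, not the thesis's actual model; the embedding size and temperature value are assumptions.

```python
# Generic sketch of the symmetric InfoNCE objective used in paired
# video-text contrastive learning (not the thesis's actual model).
import torch
import torch.nn.functional as F

def video_text_nce(video_emb, text_emb, temperature=0.07):
    """video_emb, text_emb: (N, D) embeddings of N aligned pairs."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature        # (N, N) cosine similarities
    labels = torch.arange(v.size(0))      # diagonal entries are positives
    # Symmetric loss over video->text and text->video directions; the
    # off-diagonal "negatives" are the other pairs in the batch, which
    # is where false negatives arise when two clips share semantics.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

loss = video_text_nce(torch.randn(8, 256), torch.randn(8, 256))
```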
180

Reassessing a Legacy: Rachmaninoff in America, 1918–43

Gehl, Robin S. January 2008
No description available.
