About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
741

Um sistema eficiente de detecção da ocorrência de eventos em sinais multimídia. / An efficient system for detecting events in multimedia signals.

Celso de Oliveira 01 July 2008 (has links)
Nos últimos anos tem ocorrido uma necessidade crescente de métodos que possam lidar com conteúdo multimídia em larga escala, e com busca de tais informações de maneira eficiente e efetiva. Os objetos de interesse são representados por vetores descritivos (e. g. cor, textura, geometria, timbre) extraídos do conteúdo, associados a pontos de um espaço multidimensional. Um processo de busca visa, então, encontrar dados similares a uma dada amostra, tipicamente medindo distância entre pontos. Trata-se de um problema comum a uma ampla variedade de aplicações incluindo som, imagens, vídeo, bibliotecas digitais, imagens médicas, segurança, entre outras. Os maiores desafios dizem respeito às dificuldades inerentes aos espaços de alta dimensão, conhecidas por curse of dimensionality, que restringem significativamente a aplicação dos métodos comuns de busca. A literatura recente contém uma variedade de métodos de redução de dimensão que são altamente dependentes do tipo de dado considerado. Constata-se também certa carência de métodos gerais de análise que possam prever com precisão o desempenho dos algoritmos propostos. O presente trabalho contém uma análise geral dos princípios aplicáveis aos sistemas de busca em espaços de alta dimensão. Tal análise permite estabelecer de maneira precisa o compromisso existente entre robustez, refletida principalmente na imunidade a ruído, a taxa de erros de reconhecimento e a dimensão do espaço de observação. Além disto, mostra-se que é possível conceber um método geral de mapeamento, para fins de reconhecimento, que independe de especificidades do conteúdo. Para melhorar a eficiência de busca, um novo método de busca em espaços de alta dimensão é introduzido e analisado. Por fim, descreve-se sumariamente uma realização prática, desenvolvida segundo os princípios discutidos e que atende eficientemente aplicações comerciais de monitoramento de exibição de conteúdo em rádio e TV. 
/ In the last few years there has been an increasing need for methods that can handle large-scale multimedia content and search such information efficiently and effectively. The objects of interest are represented by feature vectors (e.g. color, texture, geometry, timbre) extracted from the content and associated with points in a multidimensional space. A search process then aims to find data similar to a given sample, typically by measuring the distance between points. This is a problem common to a wide range of applications, including sound, image, video, digital libraries, medical imagery, and security, among others. The major challenges stem from difficulties inherent to high-dimensional spaces, known as the curse of dimensionality, which significantly limit the applicability of the most common search methods. The recent literature contains a number of dimension-reduction methods that are highly dependent on the type of data considered. There has also been a certain lack of general analysis methods that can accurately predict the performance of the proposed algorithms. The present work contains a general analysis of the principles applicable to high-dimensional search systems. This analysis precisely establishes the tradeoff among system robustness (reflected mainly in noise immunity), the error rate, and the dimension of the observation space. Furthermore, it is shown that it is possible to conceive a mapping method, for recognition purposes, that is independent of content specificities. To improve search efficiency, a new high-dimensional search method is introduced and analyzed. Finally, a practical realization is briefly described, developed in accordance with the principles discussed, which efficiently addresses commercial applications for monitoring content broadcast on radio and TV.
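The search procedure the abstract describes — representing items as feature vectors and ranking them by distance to a query point — can be illustrated with a minimal brute-force nearest-neighbor sketch in Python. The 64-dimensional vectors, dataset size, and noise level below are hypothetical, chosen only for illustration, not taken from the thesis:

```python
import numpy as np

def nearest_neighbor(query, database):
    """Return the index of the database vector closest to `query` (Euclidean)."""
    dists = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 64))               # 1000 items, 64-dim feature vectors
q = db[42] + rng.normal(scale=0.01, size=64)   # noisy observation of item 42
idx = nearest_neighbor(q, db)                   # recovers item 42 despite the noise
```

Brute force scans every vector, which is exactly what breaks down at scale; the thesis's tradeoff between noise immunity, error rate, and dimension governs how much structure an index can exploit instead.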
742

Gestural musical interfaces using real time machine learning

Dasari, Sai Sandeep January 1900 (has links)
Master of Science / Department of Computer Science / William H. Hsu / We present gestural music instruments and interfaces that help musicians and audio engineers express themselves efficiently. While we have mastered building a wide variety of physical instruments, the quest for virtual instruments and sound synthesis is on the rise. Virtual instruments are essentially software that enables musicians to interact with a sound module in the computer. Since the invention of MIDI (Musical Instrument Digital Interface), devices and interfaces for interacting with sound modules, such as keyboards, drum machines, joysticks, and mixing and mastering systems, have been flooding the music industry. Research in the past decade has gone one step further, enabling interaction through simple musical gestures to create, shape, and arrange music in real time. Machine learning is a powerful tool that can be used to teach simple gestures to the interface. The ability to teach innovative gestures and shape the way a sound module behaves unleashes the untapped creativity of an artist. Timed music and multimedia programs such as Max/MSP/Jitter, along with machine learning techniques, open gateways to embodied musical experiences without physical touch. This master's report presents my research and observations, and discusses how this interdisciplinary field could be used to study wider neuroscience problems such as embodied music cognition and human-computer interaction.
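As a rough illustration of teaching gestures to an interface, the sketch below classifies a hypothetical gesture with a small k-nearest-neighbors vote over summary features. The feature choice (mean and standard deviation of a sensor stream), the gesture labels, and the training values are all invented for the example; a real Max/MSP pipeline would differ:

```python
import numpy as np

def knn_predict(train_X, train_y, sample, k=3):
    """Classify `sample` by majority vote among its k nearest training gestures."""
    dists = np.linalg.norm(train_X - sample, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy training set: each gesture summarized as (mean, std) of a sensor stream.
train_X = np.array([[0.10, 0.05], [0.15, 0.04],   # "swipe" examples
                    [0.90, 0.30], [0.85, 0.35]])  # "shake" examples
train_y = np.array(["swipe", "swipe", "shake", "shake"])

pred = knn_predict(train_X, train_y, np.array([0.12, 0.06]))
print(pred)  # → swipe
```

The same teach-by-example loop (record a few labeled gestures, then classify new ones in real time) is what lets a performer define custom gestures without programming the mapping by hand.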
743

Videogame como linguagem audiovisual: compreensão e aplicação em um estudo de caso - Super Street Fighter / Videogame as audiovisual language: understanding and application in a case study - Super Street Fighter

Corrêa, Francisco Tupy Gomes 27 September 2013 (has links)
A relação entre os aspectos fílmicos e lúdicos dos jogos vai muito além da palavra videogame. Essa terminologia frente à humanidade é muito recente, apenas com algumas dezenas de anos, porém, trata-se de algo que converge em símbolos, técnicas e práticas, revestindo tais itens contemporâneos e impactando a sociedade significativamente. Logo, configura-se como um objeto de estudo capaz de promover reflexões diversas. No caso desta pesquisa, consideramos que nessa expressão do ato de jogar existe muito de game e pouco de vídeo. As abordagens focando as narrativas e as mecânicas muitas vezes deixam uma lacuna referente aos processos audiovisuais. Em função desta observação, foi realizado um estudo de cunho hipotético-dedutivo e metafórico preconizando uma visão integradora de três métodos distintos: a Ludologia, a Narratologia e a Linguagem Audiovisual. A motivação do título escolhido para o estudo de caso, Super Street Fighter IV, ocorreu por ser um jogo popular, dentro de um gênero típico de jogabilidade, que há mais de duas décadas se reinventa para agradar os fãs. Além disso, dialoga diretamente com temas culturais (costumes asiáticos e artes marciais) e temas cinematográficos (filmes de luta e o ícone representado por Bruce Lee). O estudo focou os resultados visando contribuir para uma compreensão do videogame, de modo que a pesquisa realizada pudesse trazer parâmetros tanto para aprofundar elementos presentes em sua linguagem quanto para o desenvolvimento de questões ligadas à sua realização. / The relation between the cinematographic and ludic aspects of games goes well beyond the word videogame. The term is very recent in human history, having appeared only a few decades ago. However, it is something that converges symbols, techniques, and practices, marking these contemporary items and significantly impacting society. For this reason, it presents itself as a subject of study capable of promoting diverse reflections and discussions.
In this research, we considered that in this expression of the act of playing there is much more game than video. Approaches that focus on narratives and mechanics often leave a gap regarding audiovisual processes. Based on this observation, we devised a hypothetic-deductive, metaphoric study that integrates three distinct approaches: Ludology, Narratology, and Audiovisual Language. The motivation behind the chosen subject of study, Super Street Fighter IV, comes from its being a popular game, from a genre typified by its playability, with more than two decades of reinvention in order to please fans. Besides, it directly dialogues with cultural themes (Asian customs and martial arts) and cinematographic themes (fight movies and the popular icon Bruce Lee). The study focused on results that contribute to a comprehension of videogames, so that the research could provide parameters both for deepening elements present in this language and for developing questions related to its realization.
744

Testing Tradition

Daniel, Boner, East Tennessee State University Bluegrass Band 01 January 2012 (has links)
Smithhaven--Still Making Excuses--If Seeing Is Believing--Myth--Your Last Ride--If I Had a Dollar--Tack and Jibe--Stewie Took My Nose--March Home to Me--I Recall--God's Work Is Never Done--Farther Down the Track. / https://dc.etsu.edu/etsu_books/1101/thumbnail.jpg
745

The Johnson City Sessions 1928-1929: Can You Sing Or Play Old-Time Music?

Olson, Ted 01 January 2013 (has links)
The Johnson City Sessions were held in Johnson City, Tennessee, in October 1928 and October 1929. This work "...marks the first time these recordings have been assembled in any format. Collectively, these 100 songs and tunes are regarded by scholars and record collectors as a strong and distinctive cross-section of old-time Appalachian music just before the Great Depression. The four CDs gather every surviving recording from the sessions, while the accompanying 136-page LP-sized hardcover book contains newly researched essays on the background to the sessions and on the individual artists, with many rare and hitherto unpublished photographs, as well as complete song lyrics and a detailed discography." -- Back cover. Ted Olson (East Tennessee State University) and Tony Russell are the re-issue producers. / https://dc.etsu.edu/etsu_books/1112/thumbnail.jpg
746

Blind Alfred Reed: Appalachian Visionary

Olson, Ted 01 January 2015 (has links)
Liner notes by Ted Olson, song lyrics, and discography; produced by Ted Olson. "In this collection, all of Reed’s songs, both faith-based and secular, recorded for the Victor Talking Machine Company over two sessions in 1927 in Bristol, TN, and Camden, NJ, and two sessions in 1929 in New York City, are on one 22-track CD, complemented by well-researched essays by producer Ted Olson and LOTS of archival photos. Reed played fiddle and sang, and on some sessions he was accompanied on guitar by his son Orville. ... Olson has included the younger Reed’s solo recordings." --Steve Ramm review on Amazon / https://dc.etsu.edu/etsu_books/1117/thumbnail.jpg
747

The orchestration of modes and EFL audio-visual comprehension: A multimodal discourse analysis of vodcasts

Norte Fernández-Pacheco, Natalia 27 January 2016 (has links)
This thesis explores the role of multimodality in language learners’ comprehension, and more specifically, the effects on students’ audio-visual comprehension when different orchestrations of modes appear in vodcasts. Firstly, I describe the state of the art in its three main areas of concern, namely the evolution of meaning-making, Information and Communication Technology (ICT), and audio-visual comprehension. One of the most important contributions of the theoretical overview is the proposed integrative model of audio-visual comprehension, which attempts to explain how students process information received from different inputs. Secondly, I present a study based on the following research questions: ‘Which modes are orchestrated throughout the vodcasts?’, ‘Are there any multimodal ensembles that are more beneficial for students’ audio-visual comprehension?’, and ‘What are the students’ attitudes towards audio-visual (e.g., vodcasts) compared to traditional audio (e.g., audio tracks) comprehension activities?’. Along with these research questions, I formulated two hypotheses: audio-visual comprehension improves when a greater number of modes are orchestrated, and students have a more positive attitude towards vodcasts than towards traditional audio when carrying out comprehension activities. The study includes a multimodal discourse analysis, audio-visual comprehension tests, and student questionnaires. The multimodal discourse analysis of two British Council language-learning vodcasts, entitled English is GREAT and Camden Fashion, using ELAN as the multimodal annotation tool, shows a variety of multimodal ensembles of two, three, and four modes. The audio-visual comprehension tests were given to 40 Spanish students of English as a foreign language after they viewed the vodcasts. These comprehension tests contain questions related to specific orchestrations of modes appearing in the vodcasts.
The statistical analysis of the test results, using repeated-measures ANOVA, reveals that students obtain better audio-visual comprehension results when the multimodal ensembles consist of a greater number of orchestrated modes. Finally, the data compiled from the questionnaires indicate that students have a more positive attitude towards vodcasts than towards traditional audio listening. Together, the results of the audio-visual comprehension tests and questionnaires support the two hypotheses of this study.
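As a sketch of the statistical test named above, the following computes the F statistic of a one-way repeated-measures ANOVA from first principles. The score matrix is invented, not the thesis data; rows are hypothetical students and columns are hypothetical comprehension scores under two-, three-, and four-mode ensembles:

```python
import numpy as np

def repeated_measures_anova(scores):
    """One-way repeated-measures ANOVA F statistic.
    `scores`: (n_subjects, n_conditions) array, each subject tested in every condition."""
    n, k = scores.shape
    grand = scores.mean()
    ss_conditions = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # effect of interest
    ss_subjects = k * ((scores.mean(axis=1) - grand) ** 2).sum()    # removed from error
    ss_error = ((scores - grand) ** 2).sum() - ss_conditions - ss_subjects
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_conditions / df_cond) / (ss_error / df_err)

scores = np.array([[5, 6, 8],
                   [4, 6, 7],
                   [6, 7, 9],
                   [5, 5, 8]])  # rows: students; cols: 2-, 3-, 4-mode ensembles
F = repeated_measures_anova(scores)
print(F)  # → 42.0
```

Partitioning out the between-subjects variability is what distinguishes this design from a plain one-way ANOVA: each student serves as their own control across the three ensemble conditions.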
748

Reflections on the use of a smartphone to facilitate qualitative research in South Africa

Matlala, Sogo France, Matlala, Makoko Neo 10 January 2018 (has links)
Journal article published in The Qualitative Report, 2018, Volume 23, Number 10, How To Article 2, pp. 2264-2275 / This paper describes the conditions that led to the use of a smartphone to collect qualitative data instead of a digital voice recorder, the standard device for recording interviews. Through a review of technical documents, the paper defines a smartphone and describes its applications that are useful in the research process. It further points out possible uses of other smartphone applications in the research process. The paper concludes that a smartphone is a valuable device for researchers.
749

No Foolin'? Fake News and A.I. Manipulation of Audio, Video, and Images

Tolley, Rebecca 09 February 2019 (has links)
No description available.
750

Audio-tactile displays to improve learnability and perceived urgency of alarming stimuli

Momenipour, Amirmasoud 01 August 2019 (has links)
Based on cross-modal learning and multiple resource theory, human performance can be improved by receiving and processing additional streams of information from the environment. In alarm situations, alarm meanings need to be distinguishable from each other and learnable for users. In audible alarms, different signals can be generated by manipulating the temporal characteristics of sounds. However, in some cases, such as discrete medical alarms where there are too many audible signals to manage, changes in temporal characteristics may not generate discriminable signals that are easy for listeners to learn. Multimodal displays can be developed to generate additional auditory, visual, and tactile stimuli, helping humans benefit from cross-modal learning and multiple attentional resources for a better understanding of alarm situations. In work domains where alarms are predominantly auditory and visual displays cannot be accessed at all times, tactile displays can enhance the effectiveness of alarms by providing additional streams of information for understanding them. However, because of the low information density of tactile presentation, the use of tactile alarms has been limited. In this thesis, the learnability of auditory and tactile alarms, separately and together in an audio-tactile display, was studied with human subjects. The objective of the study was to test cross-modal learning when the messages of an alarm (i.e., meaning and urgency level) were conveyed simultaneously in audible, tactile, and audio-tactile alarm displays. The alarm signals were designed using the spatial characteristics of tactile signals and the temporal characteristics of audible signals, separately in audible and tactile displays as well as together in an audio-tactile display.
The study explored whether using multimodal (tactile and audible) alarms helps in learning unimodal (audible or tactile) alarm meanings and urgency levels. The findings can inform the design of more efficient discrete audio-tactile alarms that promote the learnability of alarm meanings and urgency levels.
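As an illustration of manipulating temporal characteristics to encode urgency, the sketch below builds binary pulse-train envelopes; the pulse counts, durations, and sample rate are hypothetical, not the thesis's actual alarm set. The same envelope could gate a tone for an audible alarm or drive a vibration motor for its tactile counterpart, which is one simple way to keep the two modalities redundant:

```python
import numpy as np

def alarm_pulse_train(n_pulses, pulse_ms, gap_ms, sample_rate=8000):
    """Binary on/off envelope for an alarm: `n_pulses` bursts separated by silent gaps.
    More pulses and shorter gaps are commonly perceived as more urgent."""
    pulse = np.ones(int(sample_rate * pulse_ms / 1000))
    gap = np.zeros(int(sample_rate * gap_ms / 1000))
    return np.concatenate([np.concatenate([pulse, gap]) for _ in range(n_pulses)])

# Two hypothetical urgency levels, distinguished purely by temporal pattern.
high_urgency = alarm_pulse_train(n_pulses=5, pulse_ms=100, gap_ms=50)
low_urgency = alarm_pulse_train(n_pulses=2, pulse_ms=100, gap_ms=400)
```

A tactile version of the same alarm could instead vary *which* actuator fires (a spatial code), which is the tactile design dimension the abstract describes.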
