
Gesto musical e o uso de interfaces físicas digitais na performance do live electronics / Musical gesture and the use of digital physical interfaces in live electronics performance

Mauricio Perez 07 October 2016 (has links)
This research analyzes the use of digital physical interfaces in the creation and performance of real-time electroacoustic music, above all through the concept of musical gesture. To this end, we first conducted a systematic study of the two central objects of the research: digital interfaces and the concept of gesture in music. We revisit some core elements of the construction of digital musical instruments, adding new perspectives to digital lutherie from the standpoint of musical gesture, for example in the conception of mapping. In addition, we raise aesthetic questions concerning both the understanding of these interfaces as musical instruments and their use in musical composition and live electronics performance. The concept of musical gesture, in turn, is understood here as an issue emerging from contemporary musical practice. We survey the different understandings that music research attaches to this concept, such as its bodily and sonic dimensions and its kinetic and semantic capacities, and we expand the concept of musical gesture, in a context that uses these interfaces, toward ideas such as corporeality, physicality, and causality. Subsequently, we relate the elements present in the concept of musical gesture to the constituent elements of the interfaces and to the practice of music creation and performance mediated by them, chiefly from the point of view of causality. We thus recognize that the relationship between bodily actions and sound movements contributes to musical meaning in practices that use digital physical interfaces, and we identify that these causal relationships can range from physical-natural models of gestural coherence to artificial relations between gesture and sound and their surrogates. 
Finally, we present a methodology for analyzing performances that use these interfaces, as understood here, which considers both how the interface presents itself to the musician who plays it and how the relationship between performer and interface can be understood by the spectator-listener. These propositions demonstrate how such interfaces are embedded in a context that treats the body as an aesthetic element in the creation of live electroacoustic music.

Modelo abrangente e reconhecimento de gestos com as mãos livres para ambientes 3D / Comprehensive model and free-hand gesture recognition for 3D environments

João Luiz Bernardes Júnior 18 November 2010 (has links)
This work's main goal is to enable the recognition of free-hand gestures for interaction in 3D environments, allowing gestures to be selected, for each interaction context, from a large set of possible gestures. This large set should increase the probability of selecting gestures that already exist in the application's domain, or that have clear logical associations with the actions they command, and thus ease the learning, memorization, and use of the gestures. These are important requirements for entertainment and education applications, this work's main targets. A gesture model is proposed that, following a linguistic approach, divides gestures into three components: hand posture, hand movement, and the location where the gesture starts. By combining small numbers of each of these components, the model allows the definition of tens of thousands of gestures of different types. Recognition of gestures modeled this way is implemented by a finite state machine with explicit rules that combines the recognition of each component. This machine assumes only that gestures are segmented in time by known postures, and makes no other assumption about how each component is recognized, allowing its use with different algorithms and in different contexts. While this model and this state machine are the work's main contributions, the work also includes the development of simple but novel algorithms for recognizing twelve basic movements and a large variety of postures using very accessible equipment and little setup, as well as a modular framework for hand gesture recognition in general, which can also be applied to other domains and with other algorithms. In addition, tests with users raise several questions about this form of interaction and show that the system satisfies the requirements set for it.
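The three-component gesture model and the segmenting state machine described in this abstract can be sketched as follows. This is an illustrative assumption-laden sketch, not code from the dissertation: the class names, the two-state design, and the tiny gesture vocabulary are all invented here to show how posture-segmented recognition of (posture, movement, location) triples might be combined.

```python
from dataclasses import dataclass

# Hypothetical sketch of a three-component gesture model: a gesture is a
# starting location, a hand posture, and a movement. Names are illustrative.

@dataclass(frozen=True)
class Gesture:
    posture: str    # e.g. "fist", "open_hand"
    movement: str   # one of a small set of basic movements
    location: str   # region of space where the gesture starts

class GestureFSM:
    """Explicit-rule state machine; its only assumption is that gestures
    are segmented in time by recognized (known) postures."""
    IDLE, TRACKING = range(2)

    def __init__(self, vocabulary):
        self.vocabulary = set(vocabulary)  # gestures enabled for this context
        self.state = self.IDLE
        self.posture = None
        self.location = None

    def on_posture(self, posture, location):
        # A known posture starts (segments) a candidate gesture.
        self.state = self.TRACKING
        self.posture, self.location = posture, location

    def on_movement(self, movement):
        # A recognized movement while tracking completes the triple.
        if self.state != self.TRACKING:
            return None
        self.state = self.IDLE
        g = Gesture(self.posture, movement, self.location)
        return g if g in self.vocabulary else None

vocab = [Gesture("fist", "push", "center")]
fsm = GestureFSM(vocab)
fsm.on_posture("fist", "center")
print(fsm.on_movement("push"))  # the completed, recognized gesture
```

Because the machine never inspects how a posture or movement was recognized, the component recognizers can be swapped freely, which mirrors the abstract's claim of independence from any particular recognition algorithm.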

O gesto na clínica fonoaudiológica: um estudo sob o olhar da análise discursiva materialista / Gesture in the speech-language therapy clinic: a study in the light of materialist discourse analysis

Kitahara, Michelle Fogaça de Oliveira 01 March 2018 (has links)
Gesture has been exhaustively discussed in the international scientific literature under different approaches and therapeutic methods. This dissertation investigates gesture in the speech-language therapy clinic, and the dominant ideology around it, in the light of the theoretical-methodological framework of Materialist Discourse Analysis. Twelve speech-language therapists working in different clinical areas were interviewed in search of the main discursive thread sustaining their discourse. The semi-open interviews were recorded, the discursive data were transcribed, and fragments were extracted and analyzed from the perspective indicated above. The analysis reveals that the conducting thread of the therapists' discourse is the positivist ideology of science, which fragments subjects, bodies, and language, and places speech and gesture in a hierarchy in which gesture is subordinated to speech. The materiality of language reveals an unconscious identification of the therapists with the signifier "Fono-audio-logia" (speech-language pathology), insofar as the return to speech and their professional identity are brought to the fore. 
The treatment of deaf patients appears amalgamated with the discourse on gesture, in which LIBRAS (Brazilian Sign Language), as a codified form of language, is seen as a threat to the status of speech. A broader meaning of the term speech was also observed, sliding from a sense related only to the articulation of sounds toward one of affecting others through the language of the hands and of gestures. From this perspective, there are also formations that challenge the dominant ideology and make room for gesture in the clinic, in both evaluation and treatment. The analysis was supplemented with a survey of the main Brazilian speech-language pathology journals of the last ten years using the keywords gesture/gestures; 23 articles were found, most of which refer to gesture in the clinical care of aphasic patients, autistic patients, and people with other specific syndromes, beyond its role in understanding the process of language acquisition. (Funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES.)

Affective Gesture Fast-track Feedback Instant Messaging (AGFIM)

Adesemowo, Kayode January 2005 (has links)
Text communication is often perceived as lacking components of communication that are essential to sustaining interaction or conversation. This incoherency tends to make text communication "plastic": it is traditionally devoid of intonation, pitch, gesture, facial expression, and visual or auditory cues. Nevertheless, Instant Messaging (IM), a form of text communication, is seeing growing uptake both on PCs and on mobile handhelds. There is a need to "rubberise" this plastic text messaging to improve co-presence in text communication, thereby improving synchronous textual discussion, especially on handheld devices. One element of interaction is gesture, seen as a natural way of conversing. Attaining some level of naturalism in interaction requires improving the spontaneity of synchronous communication, which can partly be achieved by enhancing input mechanisms. To enhance input mechanisms for interactive text-based chat on mobile devices, there is a need to facilitate gesture input. Enhancement is achievable in a number of ways, such as redesigning the input mechanism or adapting the input offering. This thesis explores an affective gesture mode of interface redesign as an input-offering adaptation, without major physical reconstruction of handheld devices. The thesis presents a text-only IM system built on the Session Initiation Protocol (SIP) and SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE). It was developed with a novel user-defined hotkey, implemented as a one-click context menu, to "fast-track" text gestures and emoticons. A hybrid quantitative and qualitative approach was taken to enable data triangulation. Results from experimental trials show that the Affective Gesture (AG) approach improved IM chat spontaneity and response. Feedback from the user trials affirms that the AG hotkey improves chat responsiveness, thus enhancing chat spontaneity.

Real-time Hand Gesture Detection and Recognition for Human Computer Interaction

Dardas, Nasser Hasan Abdel-Qader 08 November 2012 (has links)
This thesis focuses on bare-hand gesture recognition, proposing a new architecture for real-time vision-based hand detection, tracking, and gesture recognition for interacting with an application via hand gestures. The first stage of the system detects and tracks a bare hand against a cluttered background using face subtraction, skin detection, and contour comparison. The second stage recognizes hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar was developed to generate gesture commands for application control. The hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after the keypoints of every training image are extracted with the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of each training image into a unified-dimension histogram vector (bag of words) after K-means clustering. This histogram is treated as the input vector for a multi-class SVM that builds the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using the author's algorithm; the keypoints are then extracted from the small image containing the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which the multi-class SVM classifier uses to recognize the hand gesture. A second hand gesture recognition system was proposed using Principal Component Analysis (PCA): the most significant eigenvectors and the weights of the training images are determined; in the testing stage, the hand posture is detected in every frame, and the small image containing the detected hand is projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance between the test weights and the training weights of each training image determines the recognized hand gesture. 
Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame uses a stationary bicycle as one of the main inputs for game playing; the user controls and directs left-right movement and shooting actions in the game with a set of hand gesture commands. In the second game, the user controls and directs a helicopter over a city with a set of hand gesture commands.
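The bag-of-features training/testing pipeline summarized in this abstract can be sketched with scikit-learn. This is a minimal sketch under stated assumptions: random arrays stand in for real 128-dimensional SIFT descriptors (which would come from a detector such as OpenCV's SIFT), and the labels, cluster count, and kernel choice are illustrative, not the thesis's actual parameters.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def to_bag_of_words(descriptors, kmeans):
    """Quantize one image's descriptors into a normalized visual-word histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Training data: one descriptor array per image, one gesture label per image.
# (Random stand-ins for SIFT descriptors; the two classes are well separated.)
train_descs = [rng.normal(loc=l, size=(40, 128)) for l in (0.0, 1.0, 0.0, 1.0)]
labels = ["open_hand", "fist", "open_hand", "fist"]

# Build the visual vocabulary by K-means over all training descriptors,
# then train a multi-class SVM on the bag-of-words histograms.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
kmeans.fit(np.vstack(train_descs))
X = np.array([to_bag_of_words(d, kmeans) for d in train_descs])
clf = SVC(kernel="linear").fit(X, labels)

# Testing: quantize a new image's descriptors and classify the histogram.
test_desc = rng.normal(loc=1.0, size=(40, 128))
print(clf.predict([to_bag_of_words(test_desc, kmeans)])[0])  # → fist
```

In a real system the per-frame hand region detected by the first stage would replace `test_desc`, so classification cost stays low because only the small hand image is described and quantized.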

Einsatz der elektronischen Patientenakte im Operationssaal am Beispiel der HNO-Chirurgie / Use of the electronic patient record in the operating room, illustrated by ENT surgery

Dressler, Christian 04 June 2013 (has links) (PDF)
If a surgeon needs information from the patient record during an operation today, he or she is forced either to break sterility or to instruct staff to make the relevant information accessible. From a technical point of view, a system for intraoperative control and display is very easy to realize. Its basis is an electronic patient record (EPR) that manages, for example, software-generated or scanned documents. This thesis addresses the following questions: Is such a system used meaningfully in the operating room? Which methods of sterile interaction come into consideration? How must the graphical presentation be adapted to the operating room? Can all available patient data be accessed by implementing current communication standards? To this end, two pilot studies were conducted in an outpatient ENT clinic. The first study evaluated the first commercial product on the market, Karl Storz's "MI-Report", which is operated by gesture recognition. For the second study, an EPR system (Doc-O-R) was developed that preselected the displayed documents depending on the procedure and could be operated with a foot switch. Roughly 50 procedures were documented per system, recording every document viewed and the reason for viewing it. On average, the systems were used more than once per procedure, and the automatic preselection of documents to reduce the number of interactions showed very good results. Since the topic is still in its infancy, the thesis closes by discussing the many possibilities that remain to be realized in novel display methods, input devices, and current standardization activities, which will also influence surgical workflows in the future.
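The procedure-dependent document preselection used by the Doc-O-R prototype can be sketched as a simple ranking step. Everything below is a hypothetical illustration: the procedure names, document types, and mapping are invented here, not the clinic's actual configuration.

```python
# Illustrative, made-up mapping from procedure to the document types a
# surgeon is most likely to need intraoperatively.
PRESELECTION = {
    "tympanoplasty": ["audiogram", "temporal_bone_ct", "surgical_consent"],
    "septoplasty": ["sinus_ct", "surgical_consent"],
}

def preselect(procedure, record):
    """Order the patient's documents so the most relevant ones come first,
    reducing the sterile interactions needed to reach them."""
    wanted = PRESELECTION.get(procedure, [])
    ranked = [d for d in wanted if d in record]    # relevant documents first
    rest = [d for d in record if d not in ranked]  # everything else after
    return ranked + rest

record = ["discharge_letter", "audiogram", "surgical_consent"]
print(preselect("tympanoplasty", record))
# → ['audiogram', 'surgical_consent', 'discharge_letter']
```

The point of such a preselection is exactly the interaction-count reduction the study measured: with the likely documents first, a single foot-switch press often suffices.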

Hand Gesture Recognition System

Gingir, Emrah 01 September 2010 (has links) (PDF)
This thesis presents a hand gesture recognition system that replaces input devices such as the keyboard and mouse with static and dynamic hand gestures for interactive computer applications. Despite the growing attention such systems receive, certain limitations remain in the literature: most applications impose constraints such as particular lighting conditions, use of a specific camera, a multi-colored glove worn by the user, or large amounts of training data. The system in this study removes these restrictions and provides an adaptive, effort-free environment for the user. The study starts with an analysis of the performance of different color spaces for skin color extraction; this analysis is independent of the working system and is performed only to obtain valuable information about the color spaces. The working system is based on two steps: hand detection and hand gesture recognition. In the hand detection process, a normalized RGB color space skin locus is used to threshold the coarse skin pixels in the image. An adaptive skin locus, whose varying boundaries are estimated from the coarse skin region pixels, then segments the distinct skin color in the image under the current conditions. Since the face has a distinctive shape, it is detected among the connected groups of skin pixels using shape analysis; non-face connected groups of skin pixels are taken to be hands. The gesture of the hand is recognized by an improved centroidal profile method applied around the detected hand. A 3D flight war game, a boxing game, and a media player, all controlled remotely using only static and dynamic hand gestures, were developed as human-machine interface applications based on the theoretical background of this study. In the experiments, recorded videos were used to measure the performance of the system, and a correct recognition rate of about 90% was obtained with nearly real-time computation.
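The first step described above, coarse skin thresholding in normalized RGB, can be sketched in a few lines. The box bounds on the (r, g) chromaticities below are illustrative assumptions for demonstration; the thesis fits its own skin locus and then adapts it, which this sketch does not do.

```python
import numpy as np

def skin_mask(image):
    """image: HxWx3 uint8 RGB array -> boolean mask of coarse skin pixels."""
    rgb = image.astype(float)
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                       # avoid division by zero on black pixels
    r, g, _ = np.moveaxis(rgb / s, 2, 0)  # chromaticities, r + g + b = 1
    # Simple box-shaped locus in (r, g) space (bounds are illustrative).
    return (r > 0.36) & (r < 0.47) & (g > 0.28) & (g < 0.36)

# Example: one skin-toned pixel and one blue pixel.
img = np.array([[[200, 140, 110], [30, 40, 200]]], dtype=np.uint8)
print(skin_mask(img))  # True for the skin-toned pixel only
```

Normalizing by intensity is what makes this coarse step relatively robust to brightness changes; the adaptive second locus the abstract mentions would then re-estimate the bounds from the pixels this mask selects.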

Enabling mobile microinteractions

Ashbrook, Daniel Lee 12 January 2010 (has links)
While much attention has been paid to the usability of desktop computers, mobile computers are quickly becoming the dominant platform. Because mobile computers may be used in nearly any situation -- including while the user is actually in motion, or performing other tasks -- interfaces designed for stationary use may be inappropriate, and alternative interfaces should be considered. In this dissertation I consider the idea of microinteractions -- interactions with a device that take less than four seconds to initiate and complete. Microinteractions are desirable because they may minimize interruption; that is, they allow a tiny burst of interaction with a device so that the user can quickly return to the task at hand. My research concentrates on methods for applying microinteractions through wrist-based interaction. I consider two modalities for this interaction: touchscreens and motion-based gestures. In the case of touchscreens, I consider the interface implications of making touchscreen watches usable with the finger instead of the usual stylus, and investigate users' performance with a round touchscreen. For gesture-based interaction, I present a tool, MAGIC, for designing gesture-based interactive systems, and detail the evaluation of the tool.

One Butterfly: understanding interface and interaction design for multitouch environments in museum contexts

Whitworth, Erin Casey 30 November 2010 (has links)
Museums can be perceived as stuffy and forbidding; web technologies can enable museums to expand access to their collections and counterbalance these perceptions. Museums are searching for new ways to communicate with the public to make a better case for their continued relevance in the digital information age. The emergence of multitouch computing, other diverse forms of digital access, and the popularization of user experience design challenge museum design professionals to synthesize the information-seeking experience that occurs across multiple computing platforms. As a means of addressing these issues, this Master's Report summarizes the One Butterfly design project, whose goal was to create a design for a multitouch interface for federated search of Smithsonian collections. The report describes the project's three major phases. First, an idea for an interface was developed, and designs based on that idea were captured and clarified. Second, a formal review of related research was undertaken to ground these designs in the museum informatics, user interface design, and multitouch interaction design literatures. Finally, the report concludes with a review of and reflection on the designs and their underlying ideas in light of what was learned in the previous phases.
