331

Children’s Theory of Mind, Joint Attention, and Video Chat

Curry, Ryan H. 21 June 2021 (has links)
No description available.
332

Gesture recognition with application in music arrangement

Pun, James Chi-Him 05 November 2007 (has links)
This thesis studies interaction with music synthesis systems using hand gestures. Traditionally, users of such systems have been limited to input devices such as buttons, pedals, faders, and joysticks. Gestures allow the user to interact with the system in a more intuitive way. Freed from the constraints of input devices, the user can simultaneously control more elements of the music composition, increasing the system's responsiveness to the musician's creative thoughts. A working system based on this concept is implemented, employing computer vision and machine intelligence techniques to recognise the user's gestures. / Dissertation (MSc)--University of Pretoria, 2006. / Computer Science / MSc / unrestricted
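As a rough illustration of the kind of pipeline such a camera-based system could use (not the implementation described in the thesis), the Python sketch below tracks a hand with OpenCV via simple skin-colour segmentation and maps its vertical position to a single controller-style value. The colour thresholds and the 0-127 output range are assumptions for the example; in an arrangement tool the printed value would instead drive a synthesis or mixing parameter.

```python
# Illustrative sketch only: camera-based hand tracking mapped to one control value.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range (hypothetical values; needs per-setup calibration).
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)   # assume the largest blob is the hand
        x, y, w, h = cv2.boundingRect(hand)
        # Map vertical hand position to a 0..127 value (MIDI-like controller range).
        value = int(127 * (1 - (y + h / 2) / frame.shape[0]))
        print("control value:", value)              # stand-in for a synth parameter
    cv2.imshow("hand mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```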
333

Synaesthesia and Visual Music in Swedish Silent Film

Ribbing Lygon, Gustav January 2023 (has links)
This thesis examines how visual music in Swedish silent film served as an allusion to sound, either as a cinematic effect or as an intertwined part of the film narrative. Drawing on the concepts of synaesthesia and visual music, a discussion of how the two concepts relate to each other serves as the key method for analyzing a selection of films. By defining synaesthesia in relation to concepts such as Gesamtkunstwerk and photogénie, the analysis examines different synaesthetic expressions through visual music in Swedish silent film. This thesis argues that Swedish film was influenced by several forms of art and by ideas from different cinematic cultures. By comparing synaesthetic expressions through visual music in Swedish silent film and other cinematic cultures, this thesis suggests that the concepts were used in different ways to define cinema as a unique form of art.
334

R-CNN and Wavelet Feature Extraction for Hand Gesture Recognition With EMG Signals

Shanmuganathan, Vimal, Yesudhas, Harold Robinson, Khan, Mohammad S., Khari, Manju, Gandomi, Amir H. 01 November 2020 (has links)
This paper demonstrates an R-CNN-based approach for recognizing hand gestures from electromyography (EMG) signals. The signals are acquired with electrodes placed on the forearm and preprocessed with the wavelet packet transform to extract features. The R-CNN is then trained and validated on features derived from the wavelet power spectrum. A real-time test achieves an accuracy of 96.48%, higher than that of the related methods, indicating that the proposed approach recognizes gestures with the highest accuracy among those compared.
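For readers unfamiliar with wavelet packet features, the Python sketch below shows one way such sub-band energy features could be computed from a single EMG window using PyWavelets. The wavelet choice, decomposition level, and window length are assumptions for illustration, and the paper's R-CNN classifier itself is not reproduced.

```python
# Illustrative sketch: wavelet-packet sub-band energies for one EMG window.
import numpy as np
import pywt  # PyWavelets

def wavelet_packet_features(emg_window, wavelet="db4", level=4):
    """Return the log-energy of each terminal wavelet-packet node (sub-band)."""
    wp = pywt.WaveletPacket(data=emg_window, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")           # sub-bands ordered by frequency
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return np.log(energies + 1e-12)                     # log scale stabilises the dynamic range

# Hypothetical usage: a 200 ms window of single-channel EMG sampled at 1 kHz.
emg_window = np.random.randn(200)
features = wavelet_packet_features(emg_window)
print(features.shape)   # (2**level,) = (16,) sub-band features to feed a classifier
```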
335

Gesturing at Encoding Enhances Episodic Memory Recall for Older Adults.

Simhairi, Voula Sadie January 2021 (has links)
Gestures have been shown to enhance memory recall for children and adults, but little research has investigated the benefits of gesturing for recall in older adult populations. While theory suggests that older adults may be less embodied, that is, that their cognitive and perceptual processes may be less grounded in their sensorimotor capacities, the literature is unclear on whether gesturing is still associated with memory in this population. To test the effect of gesturing on recall, we compared 58 younger (20-29 yrs) and 62 older (60-85 yrs) adults' performance on an episodic memory recall task (immediately, and at a 3-week delay) after randomly assigning participants to two conditions (instructed gesture or free gesture). In the free gesture condition, participants were allowed to gesture freely while describing 26 3-second-long vignettes. Participants in the instructed gesture condition were additionally asked to produce meaningful gestures while describing the vignettes. Analyzing observational data from the free gesture condition, we found that both immediately and at a delay, younger and older adults recalled more of the vignettes that they had spontaneously gestured for than those that they had not gestured for. Looking at the effects of instructing gesture, we found that asking older adults to gesture increased their overall recall of vignettes at a delay compared to older adults left to gesture freely. The same increase in recall was not found for younger adults. These findings suggest that spontaneous gesturing at encoding is just as important for episodic memory recall in older adults as it is in younger adults, and that asking older adults to gesture may additionally benefit their episodic memory.
336

Hand Gesture Recognition Using Ultrasonic Waves

AlSharif, Mohammed H. 04 1900 (has links)
Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications across a wide variety of fields, such as mobile phone applications, smart TVs, and video gaming. With the advances in human-computer interaction technology, gesture recognition is becoming an active research area. There are two types of devices for detecting gestures: contact-based devices and contactless devices. Using ultrasonic waves to determine gestures is one approach employed in contactless devices, and hand gesture recognition utilizing ultrasonic waves is the focus of this thesis. This thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. The method uses a linear frequency modulated ultrasonic signal, designed to meet the project requirements, such as the update rate and the range of detection, while working within hardware limitations such as the limited output power and the transmitter and receiver bandwidth. The method can be adapted to other hardware setups. Gestures are identified based on two main features: the range estimate of the moving hand and the received signal strength (RSS). These two features are estimated using two simple methods: the channel impulse response (CIR) and the cross-correlation (CC) of the ultrasonic signal reflected from the gesturing hand. A customized, simple hardware setup was used to classify a set of hand gestures with high accuracy. The detection and classification were done using methods of low computational cost, which gives the proposed method great potential for implementation in many devices, including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
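The sketch below illustrates, under assumed parameter values, how the two features described above could be obtained from a reflected linear-frequency-modulated (LFM) chirp: cross-correlating the received frame with the transmitted chirp gives a delay (hence a range estimate), and the correlation peak magnitude serves as a received-signal-strength proxy. It is not the thesis's implementation; the sampling rate, chirp band, and simulated echo are placeholders.

```python
# Illustrative sketch: range and echo strength from cross-correlation with an LFM chirp.
import numpy as np
from scipy.signal import chirp, correlate

fs = 96_000                      # sampling rate (Hz), assumed
T = 0.01                         # chirp duration (s)
t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=18_000, f1=22_000, t1=T, method="linear")   # near-ultrasonic LFM chirp

def range_and_strength(rx, tx=tx, fs=fs, c=343.0):
    """Return (one-way range in metres, peak correlation magnitude)."""
    corr = correlate(rx, tx, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(tx) - 1)    # delay in samples
    delay = max(lag, 0) / fs
    distance = delay * c / 2                         # halve for two-way propagation
    return distance, np.abs(corr).max()

# Hypothetical received frame: the chirp delayed by ~2.8 ms (about 0.48 m) plus noise.
rx = np.concatenate([np.zeros(270), 0.3 * tx, np.zeros(1000)]) + 0.01 * np.random.randn(2230)
print(range_and_strength(rx))
```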
337

Sound-gesture und ihre Mediatisierungen: Musikalische als symbolische Formen von embodied cognitions aus der Natur sonisch performativen Erlebens / Sound gestures and their mediatizations: the musical as symbolic forms of embodied cognitions from the nature of sonic-performative experience

Jauk, Werner 16 August 2022 (has links)
No description available.
338

Passive gesture recognition on unmodified smartphones using Wi-Fi RSSI / Passiv gest-igenkänning för en standardutrustad smartphone med hjälp av Wi-Fi RSSI

Abdulaziz Ali Haseeb, Mohamed January 2017 (has links)
The smartphone has become a common device carried by hundreds of millions of people worldwide, and it is used to accomplish a multitude of tasks, such as basic communication, internet browsing, online shopping, and fitness tracking. Limited by its small size and tight energy budget, the human-smartphone interface is largely bound to the smartphone's small screen and simple keypad, which hinders the introduction of new, rich ways of interacting with smartphones. The industry and research community are working extensively to enrich the human-smartphone interface, either by leveraging existing smartphone resources such as microphones, cameras, and inertial sensors, or by introducing new specialized sensing capabilities into smartphones, such as compact gesture-sensing radar devices. The prevalence of radio frequency (RF) signals and their limited power needs led us to investigate using RF signals received by smartphones to recognize gestures and activities around smartphones. This thesis introduces a solution for recognizing touch-less dynamic hand gestures from the Wi-Fi Received Signal Strength (RSS) measured by the smartphone, using a recurrent neural network (RNN) based probabilistic model. Unlike other Wi-Fi based gesture recognition solutions, the one introduced in this thesis does not require changes to the smartphone hardware or operating system, and it performs hand gesture recognition without interfering with the normal operation of other smartphone applications. The developed solution achieved a mean accuracy of 78% in detecting and classifying three hand gestures in an online setting involving different spatial and traffic scenarios between the smartphone and Wi-Fi access points (AP). Furthermore, the characteristics of the developed solution were studied, and a set of improvements is suggested for future work.
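A minimal sketch of this kind of recurrent classifier is shown below, assuming fixed-length windows of one-dimensional RSSI samples and three gesture classes; the window length, layer sizes, and randomly generated training data are placeholders rather than the thesis's actual model or dataset.

```python
# Illustrative sketch: a small LSTM classifier over windows of Wi-Fi RSSI samples.
import numpy as np
import tensorflow as tf

WINDOW = 100      # RSSI samples per window (hypothetical)
N_GESTURES = 3    # matches the three gestures evaluated in the thesis

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),         # one RSSI value per time step
    tf.keras.layers.Dense(N_GESTURES, activation="softmax"),   # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder training data: normalised RSSI windows and integer gesture labels.
x = np.random.randn(256, WINDOW, 1).astype("float32")
y = np.random.randint(0, N_GESTURES, size=256)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:1]).round(3))   # class probabilities for one window
```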
339

Make people move : Utilizing smartphone motion sensors to capture physical activity within audiences during lectures / Rör på er! : Användning av rörelsesensorer i smartphones för att skapa fysisk aktivitet i en föreläsningspublik

Eklund, Frida January 2018 (has links)
Audience attention begins to decline only about 10-30 minutes into a sedentary lecture. There are different ways to counter this. One is to use a web-based audience response system (ARS), where the audience interacts with the lecturer through their smartphones; another is to take short breaks involving physical movement to re-energize both the body and the brain. In this study, these two methods have been combined and explored. By utilizing the motion sensors integrated in almost every smartphone, a physical activity for a lecture audience was created and implemented in the ARS platform Mentimeter. The proof of concept was evaluated in two lectures, based on O’Brien and Toms' model of engagement. The aim was to explore the prerequisites, both in terms of design and implementation, for creating an engaging physical activity within a lecture audience, using smartphone motion sensors to capture movements and a web-based ARS to present the data. The results showed that the proof of concept was perceived as fun and engaging, with competition and a balanced level of task difficulty found to be important factors for creating engagement. The study showed that feedback is complicated when it comes to motion gesture interaction, and that there are limitations to what can be done with smartphone motion sensors using web technologies. There is great potential for further research into how to design an energizing lecture activity using smartphones, as well as into feedback in motion gesture interaction.
340

The effect of breed selection on interpreting human directed cues in the domestic dog

Winnerhall, Louise January 2014 (has links)
Over time, artificial selection has given rise to great diversity among today's dogs. Humans and dogs have evolved side by side, and dogs have come to understand human body language relatively well. This study investigates whether selection pressure and domestication reveal differences in dogs’ skill at interpreting human directional cues, such as distal pointing. In this study, 46 pet dogs from 27 breeds and 6 crossbreeds were tested for performance on the two-way object choice task. Breeds selected to work in eye contact with humans were compared with breeds selected to work more independently. Dogs of different skull shapes were also compared, as well as age, sex, and previous training on similar tasks. No significant differences in performance were found between dogs of different ages, sexes, or skull shapes. There was a tendency toward a significant difference in performance when the dog had previously been trained on similar tasks. When dogs that made 100% one-sided choices were excluded, a tendency appeared for a difference between the cooperative worker breeds and the other breeds in the time it took the dogs to make a choice. There is a correlation between the number of correct choices made and the latency from the dogs being released to making a choice (choice latency). All groups of dogs, regardless of my categorization, performed above chance level, showing that dogs have a general ability to follow, and understand, human distal pointing.
