About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

FunkyBot: Robotrörelser styrda i takt av ljudsignaler (Robot movements controlled in time by audio signals)

Mörén, Siri, Nauwelaerts de Agé, Ebba January 2023 (has links)
In the following report, a study regarding the control of a dancing robot has been conducted. The study covers whether a robot’s movements can be controlled by a recurring and pacing audio signal at a given BPM, and how well it does so. In order to test this for results, five different BPMs have been played six times each in front of a sound sensor module connected to the robot’s microcontroller to perceive the signal. The reading of this signal then adjusts the speed at which the motors rotate. Two test subjects then pressed a button in the mobile app Soundbrenner each time they found the robot to change its movement. The presses of the button then generated a beat of their own which represents the viewer’s perceived BPM of the dancing robot. This is then used to compare the desired BPM with the BPM delivered by the robot. A maximum BPM for the robot to dance to has also been studied by gradually increasing the given BPM until it no longer could use the signal. The results showed that the robot could stick to the given BPM fairly well. The mean difference between the recorded and the desired beat was 4.9 BPM. At 100 and 140 BPM the sound sensor and microcontroller were most successful in reading the beat, while found it most challenging at 120. However, environmental factors such as noise pollution were a heavyweight cause for misreadings. The results where on the other hand near flawless when the signal was isolated and clear. The maximal beat the robot could follow was read to 239 BPM because of the delay used in the microcontroller’s code. / I följande projekt undersöks styrnnigen av en dansande robot och huruvida den kan, och isåfall hur väl, urföra sina rörelser i takt till ett uppspelat BPM, uppfattat av en ljudsensormodul. För att testa detta har fem olika takter spelats upp sex gånger var. När roboten börjat dansa, har två testpersoner fått klicka på en knapp i mobilappen Soundbrenner när de upplever att roboten byter rörelse. 
Dessa knapptryck har då alstrat en egen takt som i sin tur jämförs med den takt som faktiskt spelades upp. Även ett maximalt BPM som roboten kan dansa till har uppmäts genom en gradvis ökning tills den inte längre erhöll någon signal. Resultaten visade att roboten klarade av att hålla sig i takt någorlunda väl. Det medelvärde som erhölls för differensen mellan det uppmätta och den önskade takten var 4.9 BPM. Vid 100 och 140 BPM var det lättast för ljudsensorn och mikrokontrollern att göra korrekta mätningar, och svårat vid 120BPM. Dock var omgivningsfaktorer som buller en stor orsak till felläsningar. När signalen var tydlig och isolerad nog var resultaten mycket goda. Den högsta takt roboten kunde följa var 239 BPM på grund av den fördröjning som användes i mikrokontrollerns kod.
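The perceived-BPM comparison described above can be sketched in a few lines: tap timestamps are converted to a tempo via their mean inter-tap interval, and the perceived tempos are compared against the desired ones. This is a minimal illustration of the evaluation idea, not the authors' actual analysis code; the function names are hypothetical.

```python
from statistics import mean

def taps_to_bpm(timestamps):
    """Convert tap timestamps (in seconds) into a perceived BPM.

    Consecutive inter-tap intervals are averaged; BPM = 60 / mean interval.
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return 60.0 / mean(intervals)

def mean_bpm_error(desired, perceived):
    """Mean absolute difference between desired and perceived BPM values."""
    return mean(abs(d - p) for d, p in zip(desired, perceived))

# Taps exactly 0.5 s apart correspond to 120 BPM.
taps = [0.0, 0.5, 1.0, 1.5, 2.0]
print(taps_to_bpm(taps))  # 120.0
```

With one such perceived BPM per trial, the study's headline figure (a 4.9 BPM mean difference) is exactly a `mean_bpm_error` over all trials.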
2

Audio classification with Neural Networks for IoT implementation

Khadoor, Nadim Kvernes January 2019 (has links)
This project is based upon two previous projects handed to the author by the Norwegian University of Science and Technology in co-operation with Disruptive Technologies. The report discusses sound sensing and Neural Networks, and their application in IoT. The goal was to determine which types of Neural Networks or classification methods were most suited for audio classification. This was done by applying various classification methods and Neural Networks to a data set consisting of 8732 sound samples. These methods were logistic regression, a Feed-Forward Neural Network, a Convolutional Neural Network, a Gated Recurrent Unit network, and a Long Short-Term Memory network. To compare the Neural Networks, the accuracy on the training data set and the validation data set was evaluated. Of these methods, the feed-forward network yielded the highest validation accuracy and was the preferable classification method. However, with more work and refinement the Long Short-Term Memory network may prove to be the better solution. Future work with a Vesper V1010 piezoelectric microphone and IoT implementation is discussed, as well as the social and ethical difficulties posed by what is essentially a data-gathering system.
3

Machine Sound Recognition for Smart Monitoring

Eunseob Kim (11791952) 17 April 2024 (has links)
The onset of smart manufacturing signifies a crucial shift in the industrial landscape, underscoring the pressing need for systems capable of adapting to and managing the complex dynamics of modern production environments. In this context, the importance of smart monitoring becomes increasingly apparent, serving as a vital tool for ensuring operational efficiency and reliability. Inspired by the critical role of auditory perception in human decision-making, this study investigated the application of machine sound recognition for practical use in manufacturing environments. Addressing the challenge of utilizing machine sounds amid the loud noise of factories, the study employed an Internal Sound Sensor (ISS).

The study examined how sound propagates through structures and further explored the acoustic characteristics of the ISS, aiming to apply these findings to machine monitoring. To leverage the ISS effectively and achieve a higher level of monitoring, a smart sound monitoring framework was proposed that integrates sound monitoring with machine data and a human-machine interface. Designed for applicability and cost effectiveness, the system employs real-time edge computing, making it adaptable for use in various industrial settings.

The proposed framework and ISS were deployed across a diverse range of production environments, showcasing a leap forward in the integration of smart technologies in manufacturing. Their application extends beyond continuous manufacturing to include discrete manufacturing systems, demonstrating adaptability. By analyzing sound signals from various production equipment, this study develops machine sound recognition models that predict operational states and productivity, aiming to enhance manufacturing efficiency and oversight on real factory floors.
This comprehensive and practical approach underlines the framework's potential to transform operational management and manufacturing productivity. The study progressed to integrating manufacturing context with sound data, advancing towards high-level monitoring for diagnostic predictions and digital twin applications. This approach confirmed sound recognition's role in manufacturing diagnostics, laying a foundation for future smart monitoring improvements.
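A deliberately simplified sketch of the operational-state idea described above: an edge device can frame the internal sound sensor's signal and classify each frame as machine running or idle from its RMS energy. The thesis builds learned recognition models; this threshold rule is only a minimal stand-in, and the threshold value is a hypothetical placeholder that would in practice be calibrated per machine.

```python
import math

def frame_rms(frame):
    """Root-mean-square energy of one audio frame (a list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def machine_state(frames, threshold=0.1):
    """Label each frame 'running' or 'idle' by comparing its RMS energy
    to a threshold. The threshold is illustrative; a real deployment
    would calibrate it from recorded idle-state sound levels."""
    return ["running" if frame_rms(f) > threshold else "idle"
            for f in frames]

quiet = [0.01, -0.02, 0.015, -0.01]   # low-amplitude idle noise
loud  = [0.4, -0.5, 0.45, -0.38]      # operating/cutting sound
print(machine_state([quiet, loud]))   # ['idle', 'running']
```

Aggregating these per-frame labels over time yields the kind of operational-state and utilization estimates the framework targets, before any learned model is introduced.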
