11

Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods

Aaronson, Neil L. January 2008 (has links)
Thesis (Ph. D.)--Michigan State University. Dept. of Physics, 2008. / Title from PDF t.p. (viewed on July 22, 2009) Includes bibliographical references (p. 236-243). Also issued in print.
12

An investigation into the use of intuitive control interfaces and distributed processing for enhanced three dimensional sound localization

Hedges, Mitchell Lawrence January 2016 (has links)
This thesis investigates the feasibility of using gestures as a means of control for localizing three-dimensional (3D) sound sources in a distributed immersive audio system. A prototype system was implemented and tested which uses state-of-the-art technology to achieve the stated goals. A Windows Kinect is used for gesture recognition: human gestures are translated into control messages by the prototype system, which in turn performs actions based on the recognized gestures. The term distributed in the context of this system refers to the audio processing capacity. The prototype system partitions and allocates the processing load among a number of endpoints. The reallocated processing load consists of the mixing of audio samples according to a specification. The endpoints used in this research are XMOS AVB endpoints. The firmware on these endpoints was modified to include the audio mixing capability, which was controlled by a state-of-the-art audio distribution networking standard, Ethernet AVB. The hardware used for the implementation of the prototype system is relatively cost efficient in comparison to professional audio hardware, and is also commercially available to end users. The successful implementation and results from user testing of the prototype system demonstrate that it is a feasible option for recording the localization of a sound source. The ability to partition the processing provides a modular approach to building immersive sound systems, removing the constraint of a centralized mixing console with a predetermined speaker configuration.
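The partitioned mixing described in this abstract can be illustrated with a minimal sketch: the full channel-to-speaker mix is split so that each endpoint mixes only the source channels allocated to it, and the partial results are summed into the final output. The Python below is an illustration under an assumed gain-matrix mixing specification; the function names and data shapes are assumptions for this sketch, not the prototype's actual XMOS/Ethernet AVB interface.

```python
import numpy as np

def partition_channels(num_channels, num_endpoints):
    """Split source-channel indices into roughly equal groups, one per endpoint."""
    return np.array_split(np.arange(num_channels), num_endpoints)

def partial_mix(block, gains):
    """Mix one endpoint's allocated channels onto the speaker outputs.

    block: (channels, frames) audio samples for this endpoint's channels
    gains: (speakers, channels) gain matrix from the mixing specification
    """
    return gains @ block                      # (speakers, frames) partial output

def distributed_mix(audio, gains, num_endpoints):
    """Sum the partial mixes produced by each endpoint."""
    out = np.zeros((gains.shape[0], audio.shape[1]))
    for idx in partition_channels(audio.shape[0], num_endpoints):
        out += partial_mix(audio[idx], gains[:, idx])
    return out

# Example: 8 source channels mixed onto 4 speakers across 3 endpoints gives the
# same result as a single centralized mix.
audio = np.random.randn(8, 1024)
gains = np.random.rand(4, 8)
assert np.allclose(distributed_mix(audio, gains, 3), gains @ audio)
```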
13

Ultra-Low-Power IoT Solutions for Sound Source Localization: Combining Mixed-Signal Processing and Machine Learning

de Godoy Peixoto, Daniel January 2019 (has links)
With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Because they are deprived of auditory stimuli that could provide important cues to dangers, they are at a much greater risk of being hit by cars or other vehicles. We begin this research by building a wearable system that uses multichannel audio sensors embedded in a headset to help detect and locate cars from their honks and engine and tire noises. Based on this detection, the system can warn pedestrians of the imminent danger of approaching cars. We demonstrate that using a segmented architecture and implementation consisting of headset-mounted audio sensors, front-end hardware that performs signal processing and feature extraction, and machine-learning-based classification on a smartphone, we are able to provide early danger detection in real time, from up to 80m distance, with greater than 80% precision and 90% recall, and alert the user in time (about 6s in advance for a car traveling at 30mph). The time delay between audio signals in a microphone array is the most important feature for sound-source localization. This work also presents a polarity-coincidence, adaptive time-delay estimation (PCC-ATDE) mixed-signal technique that uses 1-bit quantized signals and a negative-feedback architecture to directly determine the time delay between signals at the analog inputs and convert it to a digital number. This direct conversion, without a multibit ADC and further digital-signal processing, allows for ultra-low power consumption. A prototype chip in 0.18μm CMOS with 4 analog inputs consumes 78nW with a 3-channel 8-bit digital time-delay output while sampling at 50kHz with a 20μs resolution and 6.06 ENOB. We present a theoretical analysis of the nonlinear, signal-dependent feedback loop of the PCC-ATDE. A delay-domain model of the system is developed to estimate the power bandwidth of the converter and predict its dynamic response. Results are validated with experiments using real-life stimuli, captured with a microphone array, that demonstrate the technique's ability to localize a sound source. The chip is further integrated in an embedded platform and deployed as an audio-based vehicle-bearing IoT system. Finally, we investigate the signal's envelope, an important feature for a host of applications enabled by machine-learning algorithms. Conventionally, the raw analog signal is digitized first, followed by feature extraction in the digital domain. This work presents an ultra-low-power envelope-to-digital converter (EDC) consisting of a passive switched-capacitor envelope detector and an inseparable successive-approximation-register analog-to-digital converter (ADC). The two blocks integrate directly at different sampling rates without a buffer between them, thanks to the ping-pong operation of their sampling capacitors. An EDC prototype was fabricated in 180nm CMOS. It provides 7.1 effective bits of ADC resolution and supports an input signal bandwidth of up to 5kHz and an envelope bandwidth of up to 50Hz while consuming 9.6nW.
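The polarity-coincidence principle behind the PCC-ATDE can be sketched in software: both signals are reduced to their sign bit, and the candidate lag that maximizes the fraction of matching polarities is taken as the time-delay estimate. The sketch below is a plain digital-domain analogue of that principle only; it does not model the chip's adaptive negative-feedback loop or mixed-signal implementation, and all names are illustrative assumptions.

```python
import numpy as np

def pcc_time_delay(x, y, max_lag):
    """Estimate how many samples y lags x (y[n] ~ x[n - d]) by polarity coincidence.

    Both signals are 1-bit quantized (sign only); the lag with the highest
    fraction of matching polarities is returned as the delay estimate.
    """
    sx, sy = np.sign(x), np.sign(y)
    lags = np.arange(-max_lag, max_lag + 1)
    scores = []
    for d in lags:
        if d >= 0:
            a, b = sy[d:], sx[:len(sx) - d]
        else:
            a, b = sy[:d], sx[-d:]
        scores.append(np.mean(a == b))       # coincidence rate at this lag
    return lags[int(np.argmax(scores))]

# Example: y is a noisy copy of x delayed by 7 samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.roll(x, 7) + 0.3 * rng.standard_normal(5000)
print(pcc_time_delay(x, y, max_lag=20))      # expected: 7
```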
14

Advances in Autonomous-Underwater-Vehicle Based Passive Bottom-Loss Estimation by Processing of Marine Ambient Noise

Muzi, Lanfranco 02 December 2015 (has links)
Accurate modeling of acoustic propagation in the ocean waveguide is important to sonar-performance prediction and requires, particularly in shallow-water environments, characterizing the bottom reflection loss with a precision that databank-based modeling cannot achieve. Recent advances in the technology of autonomous underwater vehicles (AUVs) make it possible to envision a survey system for seabed characterization composed of a short array mounted on a small AUV. The bottom power reflection coefficient (and the related reflection loss) can be estimated passively by beamforming the naturally occurring marine ambient-noise acoustic field recorded by a vertical line array of hydrophones. However, the reduced array lengths required for deployment on a small AUV can hinder the process, due to the inherently poor angular resolution. In this dissertation, original data-processing techniques are presented which, by introducing physics-based knowledge into the processing chain, can improve the performance of short arrays in this particular task. In particular, the analysis of a model of the ambient-noise spatial coherence function leads to a new proof of the result underlying the bottom reflection-loss estimation technique. The proof highlights some shortcomings inherent in the beamforming operation used in this technique so far. A different algorithm is then proposed, which removes this problem and achieves improved performance. Furthermore, another technique is presented that uses data from higher frequencies to estimate the noise spatial coherence function at a lower frequency, for sensor-spacing values beyond the physical length of the array. By "synthesizing" a longer array, the angular resolution of the bottom-loss estimate can be improved, often making use of data at frequencies above the array design frequency that would otherwise not be utilized for beamforming. The proposed algorithms are demonstrated both in simulation and on real data acquired during several experimental campaigns.
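The basic ambient-noise bottom-loss estimate described above can be sketched as follows: a vertical line array is steered downward and upward at mirror-image grazing angles with a conventional beamformer, and the ratio of the two beam powers at each angle yields the bottom reflection loss. The Python below is a minimal sketch assuming a plane-wave steering vector and a given cross-spectral density matrix of the noise; the sign convention, names, and parameters are assumptions for illustration, not the dissertation's improved algorithms.

```python
import numpy as np

def beam_power(csdm, freq, depths, angles_deg, c=1500.0):
    """Conventional (delay-and-sum) beam power of a vertical line array.

    csdm:       (M, M) ambient-noise cross-spectral density matrix at one frequency
    depths:     (M,) hydrophone depths in metres
    angles_deg: grazing angles from horizontal (positive assumed = steered downward)
    """
    k = 2 * np.pi * freq / c
    powers = []
    for ang in np.deg2rad(np.atleast_1d(angles_deg)):
        w = np.exp(1j * k * depths * np.sin(ang)) / len(depths)   # plane-wave steering
        powers.append(np.real(np.conj(w) @ csdm @ w))
    return np.array(powers)

def bottom_loss_db(csdm, freq, depths, angles_deg):
    """Bottom loss from the ratio of downward-looking to upward-looking beam power."""
    angles = np.atleast_1d(angles_deg)
    p_down = beam_power(csdm, freq, depths, angles)
    p_up = beam_power(csdm, freq, depths, -angles)
    return -10.0 * np.log10(p_down / p_up)
```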
15

Acoustic Localization Employing Polar Directivity Patterns of Bidirectional Microphones Enabling Minimum Aperture Microphone Arrays

Varada, Vijay K. January 2010 (has links)
No description available.
16

Acoustic source localization in 3D complex urban environments

Choi, Bumsuk 05 June 2012 (has links)
The detection and localization of important acoustic events in a complex urban environment, such as gunfire and explosions, is critical to providing effective surveillance of military and civilian areas and installations. In a complex environment, obstacles such as terrain or buildings introduce multipath propagation, reflections, and diffraction, which make source localization challenging. This dissertation focuses on the problem of source localization in three-dimensional (3D) realistic urban environments. Two different localization techniques are developed to solve this problem: (a) beamforming using a few microphone phased arrays in conjunction with a high-fidelity model of the environment, and (b) fingerprinting using many dispersed microphones in conjunction with a low-fidelity model of the environment. For an effective source-localization technique using microphone phased arrays, several candidate beamformers are investigated using 2D and corresponding 3D numerical models. Among them, the most promising beamformers are chosen for further investigation using 3D large models. For realistic validation, the localization error of the beamformers is analyzed for different levels of uncorrelated noise in the environment. Multiple-array processing is also considered to improve the overall localization performance. The sensitivity of the beamformers to uncertainties that cannot be easily accounted for (e.g., a temperature gradient or an unmodeled object) is then investigated. It is observed that evaluation in 3D models is critical to correctly assess the potential of the localization technique. The enhanced minimum variance distortionless response (EMVDR) beamformer is identified as the only one that has the super-directivity property (i.e., accurate localization capability) while remaining robust to uncorrelated noise in the environment. It is also demonstrated that the detrimental effect of uncertainties in the modeling of the environment can be alleviated by incoherent multiple arrays. For an efficient source-localization technique using dispersed microphones in the environment, acoustic fingerprinting in conjunction with a diffusion-based energy model is developed as an alternative to the beamforming technique. This approach is much simpler, requiring only individual microphones rather than arrays. Moreover, it does not require accurate modeling of the acoustic environment. The approach is validated using the 3D large models. The relationship between the localization accuracy and the number of dispersed microphones is investigated. The effect of the accuracy of the model is also addressed. The results show a progressive improvement in the source localization capabilities as the number of microphones increases. Moreover, it is shown that the fingerprints do not need to be very accurate for successful localization if enough microphones are dispersed in the environment. / Ph. D.
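The fingerprinting idea can be illustrated with a small sketch: a (possibly low-fidelity) propagation model predicts, for every candidate source position, the vector of received energies at the dispersed microphones, and a measured fingerprint is matched to the closest prediction. The simple 1/r² spreading model and all names below are assumptions made for illustration, not the diffusion-based energy model developed in the dissertation.

```python
import numpy as np

def predicted_fingerprint(source, mic_positions):
    """Relative received energy at each microphone under a 1/r^2 spreading model."""
    r2 = np.sum((mic_positions - source) ** 2, axis=1)
    f = 1.0 / np.maximum(r2, 1e-6)
    return f / np.linalg.norm(f)              # normalize out the unknown source level

def localize(measured, candidates, mic_positions):
    """Return the candidate position whose predicted fingerprint best matches the data."""
    measured = measured / np.linalg.norm(measured)
    errors = [np.linalg.norm(measured - predicted_fingerprint(c, mic_positions))
              for c in candidates]
    return candidates[int(np.argmin(errors))]

# Example: three microphones in a plane, source at (4, 2), candidates on a grid.
mics = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
grid = np.array([[x, y] for x in range(11) for y in range(11)], dtype=float)
measured = predicted_fingerprint(np.array([4.0, 2.0]), mics)
print(localize(measured, grid, mics))         # -> [4. 2.]
```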
17

Head Mounted Microphone Arrays

Gillett, Philip Winslow 25 September 2009 (has links)
Microphone arrays are becoming increasingly integrated into every facet of life. From sonar to gunshot detection systems to hearing aids, the performance of each system is enhanced when multi-sensor processing is implemented in lieu of single-sensor processing. Head-mounted microphone arrays have a broad spectrum of uses that follow the rigorous demands of human hearing. From noise cancellation to focused listening, from localization to classification of sound sources, any and all attributes of human hearing may be augmented through the use of microphone arrays and signal processing algorithms. Placing a set of headphones on a human provides several desirable features such as hearing protection, control over the acoustic environment (via headphone speakers), and a means of communication. The shortcoming of headphones is the complete occlusion of the pinnae (the ears), disrupting auditory cues utilized by humans for sound localization. This thesis presents the underlying theory for designing microphone arrays placed on diffracting bodies, specifically the human head. A progression from simple to complex geometries chronicles the effect of diffracting structures on array manifold matrices. Experimental results validate theoretical and computational models, showing that arrays mounted on diffracting structures provide better beamforming and localization performance than arrays mounted in the free field. Data-independent, statistically optimal, and adaptive beamforming methods are presented to cover a broad range of goals present in array applications. A framework is developed to determine the performance potential of microphone array designs regardless of geometric complexity. Directivity index, white noise gain, and singular value decomposition are all utilized as performance metrics for array comparisons. The biological basis for human hearing is presented as a fundamental attribute of headset array optimization methods. A method for optimizing microphone locations for the purpose of recreating head-related transfer functions (HRTFs) is presented, allowing transparent hearing (also called natural hearing restoration) to be performed. Results of psychoacoustic testing with a prototype headset array are presented and examined. Subjective testing shows statistically significant improvements over occluded localization when subjects are equipped with this new transparent-hearing system prototype. / Ph. D.
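Two of the performance metrics named above, white noise gain and directivity index, can be written compactly for a given weight vector and steering vector. The sketch below assumes a free-field isotropic (sinc) noise-coherence model, which a diffracting head would of course modify; the names, the example geometry, and the parameters are illustrative assumptions, not the thesis framework itself.

```python
import numpy as np

def white_noise_gain(w, d):
    """White-noise gain of weights w for look-direction steering vector d."""
    return np.abs(np.vdot(w, d)) ** 2 / np.real(np.vdot(w, w))

def directivity_index_db(w, d, positions, freq, c=343.0):
    """Directivity index (dB) against a spherically isotropic noise field.

    positions: (M, 3) microphone coordinates. The free-field sinc coherence
    model used here is an assumption of this sketch.
    """
    k = 2 * np.pi * freq / c
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    gamma = np.sinc(k * dist / np.pi)          # np.sinc(x) = sin(pi x)/(pi x)
    num = np.abs(np.vdot(w, d)) ** 2
    den = np.real(w.conj() @ gamma @ w)
    return 10 * np.log10(num / den)

# Example: 4-microphone free-field line array, uniform weights, broadside look.
pos = np.array([[0.0, 0.0, i * 0.05] for i in range(4)])
d = np.ones(4, dtype=complex)                  # broadside steering (zero delays)
w = d / 4
print(white_noise_gain(w, d), directivity_index_db(w, d, pos, freq=2000.0))
```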
18

Multiarray Passive Acoustic Localization and Tracking

Mennitt, Daniel James 11 December 2008 (has links)
Wireless sensor networks and data fusion have received increasing attention in recent years, due to ever-increasing computational power, advances in battery and wireless technology, and the proliferation of sensor modalities. Notably, the application of acoustic sensors and arrays of sensors has expanded to encompass surveillance, teleconferencing, and sound source localization in adverse environments. The ability to passively locate and track acoustic sources, be they gunfire, animals, or geological events, is crucial to a wide range of applications. The challenge addressed herein is how best to utilize the massive amount of data collected from spatially distributed sensors. Localization in two acoustic propagation scenarios is addressed: the free-field assumption and the general case. In both cases, it is found that performance is highly dependent on the array-source geometry, which in turn drives the design of localization strategies. First, the general surveillance problem, including signal detection, classification, data association, localization, and tracking, is studied. Signal detectors are designed with a focus on robustness and capacity for real-time implementation. Specifics of the data association problem relevant to acoustic measurements are addressed. Assuming free-field propagation, a localization algorithm is developed to harness some of the vast potential and robust nature of sensor networks. In addition, a prototypical sensor network has been constructed to accompany the theoretical development, address real-world situations, and demonstrate applicability. Experimental results obtained confirm the practicality of theoretical models, support numerical results, and illustrate the effectiveness of the proposed strategies and the system as a whole. In many situations of interest, obstacles to wave propagation such as terrain or buildings exist that present unique challenges to localization. These obstacles introduce multiple paths, diffraction, and scattering into the propagation. The second part of this dissertation investigates localization in the general propagation scenario of a multi-wave, semi-reverberant environment characteristic of urban areas. Matched field processing is introduced as a feasible method and found to offer superior performance and flexibility over time-reversal techniques. The effects of uncertainty in model parameters are studied in an urban setting. Multiarray processing methods are developed and strategies to mitigate the effects of model mismatch are established. / Ph. D.
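One way to picture the free-field multiarray case is bearings-only triangulation: each array reports a bearing to the source, and the source estimate is the point minimizing the summed squared distance to all bearing lines. The least-squares sketch below is a generic illustration under that assumption, not the dissertation's algorithm; all names are hypothetical.

```python
import numpy as np

def triangulate(array_positions, bearings):
    """Least-squares intersection of bearing lines from several arrays.

    array_positions: (K, 2) array locations
    bearings:        (K,) bearing angles (radians) from each array to the source
    Returns the point minimizing the summed squared distance to all bearing lines.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, th in zip(np.asarray(array_positions, dtype=float), bearings):
        u = np.array([np.cos(th), np.sin(th)])
        P = np.eye(2) - np.outer(u, u)         # projector orthogonal to the bearing
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Example: two arrays observing a source at (50, 30).
src = np.array([50.0, 30.0])
arrays = np.array([[0.0, 0.0], [100.0, 0.0]])
brg = [np.arctan2(*(src - p)[::-1]) for p in arrays]
print(triangulate(arrays, brg))                # ~ [50. 30.]
```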
19

Development of an ATV-Based Remote-Operated Sensor Platform

Sumner, Mark David 25 May 2010 (has links)
Urban warfare is an unfortunate reality of the modern world, and that fact is unlikely to change in the near future. One significant danger to soldiers in an urban setting is posed by concealed snipers. The large amount of cover among densely packed buildings makes snipers hard to detect by sight or sound. When a sniper fires at troops, it is imperative to positively locate the sniper as soon as possible to ensure the safety of soldiers in the field. One method of sniper detection is the use of distributed sensor nodes. These nodes may be stationary, mounted on a soldier, or mounted on a vehicle. These nodes may accommodate many types of sensors, including microphones and cameras, both conventional and infrared. This project specifically deals with microphone arrays and conventional cameras mounted on a remote-operated vehicle. The purpose of this project is to demonstrate that mobile sensor platforms can be used alone or in groups to locate the source of gunshots as well as other sources of noise. The vehicle described is a recreational ATV. It has been outfitted with mechanical actuators and electronic control modules to allow the vehicle to be operated remotely. The selection and installation of these components are detailed. This includes the control of the ATV's steering, brakes, throttle, and engine starter. The system also includes a failsafe circuit to ensure that the system will shut down if positive control is lost. An array of sensors and transducers was added to the vehicle to allow for useful data collection. This includes the aforementioned microphone array and camera. Other sensors mounted on the vehicle include a GPS antenna and an electronic compass for establishing the position and orientation of the vehicle, and an accelerometer to sample engine vibration and allow for cancellation of engine noise. Once assembled, this vehicle was tested in laboratory and field environments to demonstrate its effectiveness as a mobile sensor platform. The tests showed that a microphone array could be used in combination with a camera to provide a continuous stream of images of a moving target. The tests also demonstrated how a mobile acoustic node can relocate to triangulate the location of an acoustic source and thereby replicate a larger stationary network. Overall, these tests demonstrated that such a system is a feasible platform for urban combat use. Full implementation would require the fusion of several separate features, the addition of a few new features, such as semi-autonomous operation, and further field testing. / Master of Science
20

Long Baseline Ranging Acoustic Positioning System

Gode, Tejaswi 30 April 2015 (has links)
A long-baseline (LBL) underwater acoustic communication and localization system was developed for the Virginia Tech Underwater Glider (VTUG). Autonomous underwater vehicles, much like terrestrial and aerial robots, require an effective positioning system, like GPS, to perform a wide variety of guidance, navigation, and control operations. Sea water and freshwater attenuate electromagnetic waves within a few meters of the water surface (sea water is worse due to its higher conductivity). Since radio-frequency communications are unavailable, many undersea systems use acoustic communications instead. Underwater acoustic communication is the technique of sending and receiving data below water. Underwater acoustic positioning is the technique of locating an underwater object. Among the various types of acoustic positioning systems, the LBL method offers the highest accuracy for underwater vehicle navigation. A system was developed consisting of three acoustic beacons placed on the surface of the water at known locations. Using an acoustic modem to excite an acoustic transducer and send sound waves from an underwater glider, range measurements to each of the beacons were calculated. These range measurements, along with data from the attitude and heading reference system (AHRS) on board the glider, were used to estimate the position of the underwater vehicle. Static and dynamic estimators were implemented. The system also allowed for underwater acoustic communication in the form of heartbeat messages from the glider, which were used to monitor the health of the vehicle. / Master of Science
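The static position estimate from beacon ranges can be sketched as standard trilateration: subtracting one range equation from the others linearizes the problem, which is then solved by least squares. The sketch below assumes horizontal ranges to at least three beacons (e.g., slant ranges corrected with a known depth); the names and conventions are illustrative, not the VTUG implementation.

```python
import numpy as np

def lbl_fix(beacons, ranges):
    """Estimate a 2D horizontal position from ranges to surface beacons.

    beacons: (N, 2) known beacon positions (N >= 3)
    ranges:  (N,) horizontal ranges to the vehicle
    Subtracting the first range equation from the rest linearizes the problem,
    which is then solved in a least-squares sense.
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - x0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Example: vehicle at (40, 25) with three beacons at known positions.
bk = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
p = np.array([40.0, 25.0])
r = np.linalg.norm(bk - p, axis=1)
print(lbl_fix(bk, r))                          # ~ [40. 25.]
```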
