411

A microcontroller-based Electrochemical Impedance Spectroscopy Platform for Health Monitoring Systems

Bhatnagar, Purva 16 October 2015 (has links)
No description available.
412

REMOTE MICROPHONE SOUND-FIELD VIRTUAL SENSING METHOD USING NEURAL NETWORK FOR ACTIVE NOISE CONTROL SYSTEM

Juhyung Kim (20384604) 10 December 2024 (has links)
<p dir="ltr">Active noise control has been implemented in various applications as a highly flexible, customizable and adaptive lightweight noise control technique which also serves as effective complementary counterpart to passive noise control techniques (such as sound absorbing packages). As on-chip computing power advances, low-cost implementation of active noise control algorithms targeting at controlling noise in large spatial regions is made more possible than ever before, which also excited another wave of active research on this topic in recent years after the emerging and flourishing active noise control research era in the 1990's. To control larger space, the use of a multi-input and output (MIMO) system is necessary, since the controller needs to be designed based on the measured sound information in the targeted control region (these sensors are referred to error microphones). However, it is not practical to add a limitless number of error microphones to populate the whole control region, and it is sometimes not even possible to locate the error microphone directly at targeted locations when the system is in operation due to practical constraints (e.g., in a car cabin, it is not possible to place microphones in people's ears). Therefore, the virtual sensing technique have been to predict the sound at targeted locations from remote measurements. One of the challenges in virtual sensing is its performance robustness under a time-varying acoustic environment. The purpose of this work is mainly to use the time-varying acoustic environment introduced by a person's head motion as an example case study to explore the possibility of virtual sensing the sound at the person's two ears for different head positions based on acoustic data measured at a small-sized microphone array located behind the head without any auxiliary motion tracking devices. More specifically, it is to develop a machine learning based data-driven model that uses the cross-spectral matrix of sound signals measured at the remote microphones to predict the frequency response functions between remote microphone measurements and sound at ears (i.e., the virtual sensing frequency response functions) under different head positions. </p><p> </p><p dir="ltr">To get the data to train a neural network model, a measurement setup was suggested in the paper. A HATS dummy system that mimics the human hearing system with two microphones at the ear location was placed between the noise source and the reference microphone array composed of five microphones. Treating two ear microphone locations as the desired location of virtual sensors and microphone arrays as reference microphones, different measurements were taken by slightly changing the location and angle of the HATS dummy. A cross-spectrum density matrix was calculated with the measured data, and a frequency response matrix was calculated between the microphone array and the ear microphones, which would be used to make input data and target data for the neural network, respectively. With the cross-spectrum data, Dimension reduction was processed. A covariance matrix with the vectorized cross-spectrum density matrix was calculated, and power variation was evaluated to understand which frequency bands are sensitive to the change in the acoustic environment. In the hyperparameter choice, log-cosh was used for the loss function, LeakyRelu was used for the activation function, and Adam optimizer was selected. 
After comparing different learning rate strategies, a cosine decay with an initial learning rate of 0.003 was used for the learning rate setup. Frequency response with the target range from 51 Hz to 2000 Hz was estimated successfully with the listed neural networks setting with mean square error as 0.1205 and mean absolute error as 0.2025. Its error was compared with the standard deviation of the frequency response across the measurements. The error from the estimation was significantly lower than the standard deviation, which shows that the frequency response estimation using a neural network could increase the performance of active noise control with a virtual sensor even with the change in the acoustic environment.</p>
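As a rough illustration of the training configuration described above, the following TensorFlow/Keras sketch wires together the stated hyperparameters (log-cosh loss, LeakyReLU activations, Adam with a cosine-decay learning rate starting at 0.003). The network width, depth, decay steps, and input/output dimensions are illustrative assumptions, not values taken from the thesis.

```python
import tensorflow as tf

# Illustrative dimensions (assumptions): flattened cross-spectral features in,
# real/imaginary components of the virtual-sensing FRFs out.
n_features, n_outputs = 256, 128

# Cosine-decay learning rate starting at 0.003, as described in the abstract;
# the number of decay steps is an assumption.
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.003, decay_steps=10_000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(512),
    tf.keras.layers.LeakyReLU(),       # LeakyReLU activation
    tf.keras.layers.Dense(256),
    tf.keras.layers.LeakyReLU(),
    tf.keras.layers.Dense(n_outputs),  # predicted FRF components
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
    loss=tf.keras.losses.LogCosh(),    # log-cosh loss
    metrics=["mse", "mae"],
)
```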
413

Active Control Of Noise Radiated From Personal Computers

Charpentier, Arnaud 19 November 2002 (has links)
As an indirect consequence of increased cooling requirements, personal computers (PCs) have become noisier due to the increased use of fans. Hard disk drives also contribute to the annoying noise radiated by personal computers, creating a need for the control of computer noise. Due to size constraints, implementing passive noise control techniques in a PC is difficult. Alternatively, active noise control (ANC) may provide a compact solution to these noise problems, which is the subject of this work. First, the computer noise sources were characterized. The structure-borne path was altered passively by decoupling the vibrating sources from the chassis. A global noise control strategy was then investigated with a hybrid passive/active technique based on folded lined ducts, integrating microphones and speakers, added to the PC air inlet and outlet. While the ducts were effective above 1000 Hz, a MIMO adaptive feedforward digital controller led to significant noise reduction at the duct outlets below 1000 Hz. However, global performance was limited by significant airborne flanking paths. Finally, the same type of controller was used to create a zone of quiet around the PC user's head location, implemented with multimedia speakers and microphones while the computer was placed in a semi-reverberant environment. A large zone of quiet surrounding the head was created at low frequencies (250 Hz), and its size decreased with increasing frequency (up to 1000 Hz). / Master of Science
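The thesis abstract does not name the specific adaptation algorithm, but MIMO adaptive feedforward ANC controllers of this kind are commonly built on the filtered-x LMS (FxLMS) update. The sketch below is a generic single-channel FxLMS illustration under the assumption that an FIR estimate of the secondary path, `s_hat`, is available; it is not the controller used in the thesis. In practice the reference signal would come from a sensor correlated with the fan or drive noise, and the update would be replicated across all speaker/error-microphone pairs for the MIMO case.

```python
import numpy as np

def fxlms(reference, error_mic, s_hat, n_taps=64, mu=1e-3):
    """Generic single-channel filtered-x LMS sketch (not the thesis implementation).

    reference : array of reference-signal samples x(n)
    error_mic : callable taking the control sample y(n) and returning the
                error-microphone sample e(n) (stands in for the physical plant)
    s_hat     : FIR estimate of the secondary path (assumed available)
    """
    w = np.zeros(n_taps)           # adaptive control filter coefficients
    x_buf = np.zeros(n_taps)       # reference history used to form y(n)
    fx_buf = np.zeros(n_taps)      # filtered-reference history used in the update
    xs_buf = np.zeros(len(s_hat))  # reference history passed through s_hat
    for n in range(len(reference)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = reference[n]
        y = w @ x_buf                                   # anti-noise sample sent to the speaker
        e = error_mic(y)                                # residual measured at the error mic
        xs_buf = np.roll(xs_buf, 1); xs_buf[0] = reference[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ xs_buf
        w -= mu * e * fx_buf                            # LMS update on the filtered reference
    return w
```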
414

Hardware-Aided Privacy Protection and Cyber Defense for IoT

Zhang, Ruide 08 June 2020 (has links)
With recent advances in electronics and communication technologies, our daily lives are immersed in an environment of Internet-connected smart things. Despite the great convenience brought by these technologies, privacy concerns and security issues are two topics that deserve more attention. On one hand, as smart things continue to grow in their ability to sense the physical world and send information out through the Internet, they have the potential to be used for covert surveillance of individuals. Nevertheless, people tend to adopt wearable devices without fully understanding what private information can be inferred and leaked through sensor data. On the other hand, security issues become even more serious and lethal as the world embraces the Internet of Things (IoT). Failures in computing systems are common, but a failure in IoT may now harm people's lives. As demonstrated in both academic research and industrial practice, a software vulnerability hidden in a smart vehicle may lead to a remote attack that subverts a driver's control of the vehicle. Our approach to these challenges starts by understanding privacy leakage in the IoT era and follows with adding defense layers to the IoT system against attackers with increasing capabilities. The first question we ask ourselves is "what new privacy concerns does IoT bring?" We focus on discovering information leakage beyond common sense, even from seemingly benign signals, and explore how much private information can be extracted by designing information extraction systems. Through this research, we argue for stricter access control on newly introduced sensors. After noting the importance of the data collected by IoT, we trace where sensitive data goes. In the IoT era, edge nodes are used to process sensitive data; however, a capable attacker may compromise edge nodes. Our second research effort applies trusted hardware to build trust in large-scale networks under this circumstance, protecting sensitive data from compromised edge nodes. Nonetheless, if an attacker becomes more powerful and embeds malicious logic into the code for trusted hardware during the development phase, he can still secretly steal private data. In our third research effort, we design a static analyzer for detecting malicious logic hidden inside code for trusted hardware. Beyond the privacy of collected data, another important aspect of IoT is that it affects the physical world. Our last piece of research enables a user to verify the continuous execution state of an unmanned vehicle, so that people can trust the integrity of the past and present state of the vehicle. / Doctor of Philosophy / The past few years have witnessed a rapid rise in computing and networking technologies. Such advances enable the new paradigm, IoT, which brings great convenience to people's lives. Large technology companies like Google, Apple, and Amazon are creating smart devices such as smartwatches, smart home hubs, and drones. Compared to the traditional Internet, IoT can provide services beyond digital information by interacting with the physical world through its sensors and actuators. While the deployment of IoT brings value to various aspects of our society, the lucrative reward from cyber-crime also increases in the upcoming IoT era. Two unique privacy and security concerns are emerging for IoT.
On one hand, IoT brings a large number of new sensors that are deployed ubiquitously and collect data 24/7. Users' privacy is a major concern in this circumstance because collected sensor data may be used to infer a user's private activities. On the other hand, cyber-attacks now harm not only cyberspace but also the physical world; a failure in IoT devices could result in loss of human life. For example, a remotely hacked vehicle could shut down its engine on the highway regardless of the driver's actions. Our approach to these emerging privacy and security concerns follows two directions. The first targets privacy protection: we first examine the privacy impact of upcoming ubiquitous sensing and argue for stricter access control on smart devices, then follow the flow of private data and propose solutions to protect it within the networking and cloud computing infrastructure. The other direction aims at protecting the physical world, for which we propose an innovative method to verify the cyber state of IoT devices.
415

Development of Robust Correlation Algorithms for Image Velocimetry using Advanced Filtering

Eckstein, Adric 18 January 2008 (has links)
Digital Particle Image Velocimetry (DPIV) is a planar measurement technique to measure the velocity within a fluid by correlating the motion of flow tracers over a sequence of images recorded with a camera-laser system. Sophisticated digital processing algorithms are required to provide sufficiently high accuracy for quantitative DPIV results. This study explores the potential of a variety of cross-correlation filters to improve the accuracy and robustness of the DPIV estimation. These techniques incorporate the Phase Transform (PHAT) Generalized Cross Correlation (GCC) filter applied to the image cross-correlation. The use of spatial windowing is subsequently examined and shown to be ideally suited to phase correlation estimators, due to their invariance to loss-of-correlation effects. The Robust Phase Correlation (RPC) estimator is introduced, coupling the phase correlation with spatial windowing. The RPC estimator additionally incorporates a spectral filter designed from an analytical decomposition of the DPIV Signal-to-Noise Ratio (SNR). This estimator is validated in a variety of artificial image simulations, the JPIV standard image project, and experimental images, which indicate reductions in error on the order of 50% when correlating low-SNR images. Two variations of the RPC estimator are also introduced: the Gaussian Transformed Phase Correlation (GTPC), designed to optimize the subpixel interpolation, and the Spectral Phase Correlation (SPC), which estimates the image shift directly from the phase content of the correlation. While these estimators are designed for DPIV, the methodology described here provides a universal framework for digital signal correlation analysis that could be extended to a variety of other systems. / Master of Science
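The phase-correlation idea underlying the PHAT-GCC filter can be illustrated with a short NumPy sketch: the cross-power spectrum of two interrogation windows is whitened (magnitude normalized) before the inverse transform, and the integer-pixel displacement is read from the correlation peak. This is a generic example, not the author's RPC estimator, which additionally applies spatial windowing, an SNR-derived spectral filter, and subpixel interpolation.

```python
import numpy as np

def phase_correlation_shift(win_a, win_b, eps=1e-12):
    """Estimate the integer-pixel shift between two interrogation windows
    using the PHAT-filtered (phase-only) cross-correlation."""
    A = np.fft.fft2(win_a)
    B = np.fft.fft2(win_b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + eps          # PHAT filter: keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to a signed shift (handle FFT wrap-around)
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)
```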
416

Wavelet Packet Transform Modulation for Multiple Input Multiple Output Applications

Jones, Steven M.R., Noras, James M., Abd-Alhameed, Raed, Anoh, Kelvin O.O. January 2013 (has links)
An investigation into the wavelet packet transform (WPT) modulation scheme for Multiple Input Multiple Output (MIMO) band-limited systems is presented. The implementation uses the WPT as the base multiplexing technology at baseband, instead of the traditional Fast Fourier Transform (FFT) common in Orthogonal Frequency Division Multiplexing (OFDM) systems. An investigation of a WPT-MIMO multicarrier system using the Alamouti diversity technique is presented, with results consistent with those in the original Alamouti work. The scheme is then implemented for the WPT-MIMO and FFT-MIMO cases with extended receiver diversity, namely 2 × Nr MIMO systems, where Nr is the number of receiver elements. It is found that the diversity gain decreases with increasing receiver diversity and that WPT-MIMO systems can be more advantageous than FFT-based MIMO-OFDM systems.
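For reference, the Alamouti scheme mentioned above can be summarized in a few lines. The sketch below shows the classic 2×1 flat-fading case with known channel gains; it illustrates only the space-time encoding and combining, independent of whether WPT or FFT multiplexing is used at baseband.

```python
import numpy as np

def alamouti_2x1(s1, s2, h1, h2, noise_std=0.0):
    """Classic Alamouti 2x1 scheme over two symbol periods (illustrative sketch)."""
    # Transmit: antenna 1 sends s1 then -conj(s2); antenna 2 sends s2 then conj(s1).
    n1 = noise_std * (np.random.randn() + 1j * np.random.randn())
    n2 = noise_std * (np.random.randn() + 1j * np.random.randn())
    r1 = h1 * s1 + h2 * s2 + n1                      # received in symbol period 1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2   # received in symbol period 2
    # Linear combining recovers both symbols with full transmit diversity.
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain
```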
417

Implementation of Compressive Sampling for Wireless Sensor Network Applications

Ruprecht, Nathan Alexander 05 1900 (has links)
One of the challenges of utilizing higher frequencies in the RF spectrum, for any number of applications, is the hardware constraints of analog-to-digital converters (ADCs). Since the mid-20th century, we have accepted the Nyquist-Shannon sampling theorem, which states that a signal must be sampled at twice its maximum frequency component in order to reconstruct it. Compressive Sampling (CS) offers a possible alternative: sampling below the Nyquist rate and reconstructing the signal using convex programming techniques. There have been significant advances in CS research and development (most notably since 2004), but little of it has reached everyday use; this is not for lack of theory or mathematical proof, but for lack of implementation work. Little has been done on hardware to find the realistic constraints of a working CS system used for digital signal processing (DSP). Parameters used in such a system are usually assumed based on stochastic models rather than optimized for a specific application. This thesis aims to define a minimum viable platform for implementing compressive sensing in a wireless sensor network (WSN), and to identify which parameters of CS theory should be modified depending on the application.
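Sub-Nyquist reconstruction of the kind described here is typically posed as a sparse recovery problem (minimize the l1 norm subject to y = Φx, or its LASSO relaxation). The sketch below uses a plain iterative soft-thresholding (ISTA) solver with a random Gaussian measurement matrix; the signal length, measurement count, sparsity level, and regularization weight are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def ista(y, Phi, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for the LASSO relaxation of sparse recovery:
    min_x 0.5*||y - Phi x||^2 + lam*||x||_1 (a generic CS reconstruction sketch)."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Illustrative use: a length-256 signal with 8 nonzeros, measured with 64 samples.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true
x_rec = ista(y, Phi)
```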
418

Approches paramétriques pour le codage audio multicanal

Lapierre, Jimmy January 2007 (has links)
Résumé : Afin de répondre aux besoins de communication et de divertissement, il ne fait aucun doute que la parole et l’audio doivent être encodés sous forme numérique. En qualité CD, cela nécessite un débit numérique de 1411.2 kb/s pour un signal stéréophonique. Une telle quantité de données devient rapidement prohibitive pour le stockage de longues durées d’audio ou pour la transmission sur certains réseaux, particulièrement en temps réel (d’où l’adhésion universelle au format MP3). De plus, ces dernières années, la quantité de productions musicales et cinématographiques disponibles en cinq canaux et plus ne cesse d’augmenter. Afin de maintenir le débit numérique à un niveau acceptable pour une application donnée, il est donc naturel pour un codeur audio à bas débit d’exploiter la redondance entre les canaux et la psychoacoustique binaurale. Le codage perceptuel et plus particulièrement le codage paramétrique permet d’atteindre des débits manifestement inférieurs en exploitant les limites de l’audition humaine (étudiées en psychoacoustique). Cette recherche se concentre donc sur le codage paramétrique à bas débit de plus d’un canal audio. // Abstract : In order to fulfill our communications and entertainment needs, there is no doubt that speech and audio must be encoded in digital format. In "CD" quality, this requires a bit-rate of 1411.2 kb/s for a stereo signal. Such a large amount of data quickly becomes prohibitive for long-term storage of audio or for transmitting on some networks, especially in real-time (leading to a universal adhesion to the MP3 format). Moreover, throughout the course of these last years, the number of musical and cinematographic productions available in five channels or more has continually increased. In order to maintain an acceptable bit-rate for any given application, it is obvious that a low bit-rate audio coder must exploit the redundancies between audio channels and binaural psychoacoustics. Perceptual audio coding, and more specifically parametric audio coding, offers the possibility of achieving much lower bit-rates by taking into account the limits of human hearing (psychoacoustics). Therefore, this research concentrates on parametric audio coding of more than one audio channel.
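As a check on the 1411.2 kb/s figure quoted above: CD audio uses 44,100 samples per second, 16 bits per sample, and 2 channels, so the raw bit-rate is 44,100 × 16 × 2 = 1,411,200 bits/s, i.e., 1411.2 kb/s.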
419

Amélioration de codecs audio standardisés avec maintien de l'interopérabilité

Lapierre, Jimmy January 2016 (has links)
Résumé : L’audio numérique s’est déployé de façon phénoménale au cours des dernières décennies, notamment grâce à l’établissement de standards internationaux. En revanche, l’imposition de normes introduit forcément une certaine rigidité qui peut constituer un frein à l’amélioration des technologies déjà déployées et pousser vers une multiplication de nouveaux standards. Cette thèse établit que les codecs existants peuvent être davantage valorisés en améliorant leur qualité ou leur débit, même à l’intérieur du cadre rigide posé par les standards établis. Trois volets sont étudiés, soit le rehaussement à l’encodeur, au décodeur et au niveau du train binaire. Dans tous les cas, la compatibilité est préservée avec les éléments existants. Ainsi, il est démontré que le signal audio peut être amélioré au décodeur sans transmettre de nouvelles informations, qu’un encodeur peut produire un signal amélioré sans ajout au décodeur et qu’un train binaire peut être mieux optimisé pour une nouvelle application. En particulier, cette thèse démontre que même un standard déployé depuis plusieurs décennies comme le G.711 a le potentiel d’être significativement amélioré à postériori, servant même de cœur à un nouveau standard de codage par couches qui devait préserver cette compatibilité. Ensuite, les travaux menés mettent en lumière que la qualité subjective et même objective d’un décodeur AAC (Advanced Audio Coding) peut être améliorée sans l’ajout d’information supplémentaire de la part de l’encodeur. Ces résultats ouvrent la voie à davantage de recherches sur les traitements qui exploitent une connaissance des limites des modèles de codage employés. Enfin, cette thèse établit que le train binaire à débit fixe de l’AMR WB+ (Extended Adaptive Multi-Rate Wideband) peut être compressé davantage pour le cas des applications à débit variable. Cela démontre qu’il est profitable d’adapter un codec au contexte dans lequel il est employé. / Abstract : Digital audio applications have grown exponentially during the last decades, in good part because of the establishment of international standards. However, imposing such norms necessarily introduces hurdles that can impede the improvement of technologies that have already been deployed, potentially leading to a proliferation of new standards. This thesis shows that existent coders can be better exploited by improving their quality or their bitrate, even within the rigid constraints posed by established standards. Three aspects are studied, being the enhancement of the encoder, the decoder and the bit stream. In every case, the compatibility with the other elements of the existent coder is maintained. Thus, it is shown that the audio signal can be improved at the decoder without transmitting new information, that an encoder can produce an improved signal without modifying its decoder, and that a bit stream can be optimized for a new application. In particular, this thesis shows that even a standard like G.711, which has been deployed for decades, has the potential to be significantly improved after the fact. This contribution has even served as the core for a new standard embedded coder that had to maintain that compatibility. It is also shown that the subjective and objective audio quality of the AAC (Advanced Audio Coding) decoder can be improved, without adding any extra information from the encoder, by better exploiting the knowledge of the coder model’s limitations. 
Finally, it is shown that the fixed rate bit stream of the AMR-WB+ (Extended Adaptive Multi-Rate Wideband) can be compressed more efficiently when considering a variable bit rate scenario, showing the need to adapt a coder to its use case.
420

Autonomous receivers for next-generation of high-speed optical communication networks

Isautier, Pierre Paul Roger 07 January 2016 (has links)
Advances in fiber optic communications and the convergence of the optical-wireless network will dramatically increase the network heterogeneity and complexity. The goal of our research is to create smart receivers that can autonomously identify and demodulate, without prior knowledge, nearly any signal emerging from the next-generation of high-speed optical communication networks.
