  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Death rattle : an exploration

Wee, Bee Leng January 2003 (has links)
No description available.
2

La Médiathèque de Noisy-le-Grand: internship report

Bourguignat, Christelle. Presse, Claire. January 2003 (has links) (PDF)
Internship report, library curator diploma (diplôme de conservateur des bibliothèques): Library science: Villeurbanne, ENSSIB: 2003.
3

Vibraphone transcription from noisy audio using factorization methods

Zehtabi, Sonmaz 30 April 2012 (has links)
This thesis presents a comparison between two factorization techniques, Probabilistic Latent Component Analysis (PLCA) and Non-Negative Least Squares (NNLSQ), for the problem of detecting note events played by a vibraphone, using a microphone for sound acquisition in the context of live performance. Ambient noise is reduced by using specific dictionary codewords to model the noise. The results of the factorization are analyzed by two causal onset detection algorithms: a rule-based algorithm and a trained machine-learning classifier. These onset detection algorithms yield decisions on when note events happen. Comparative results are presented, considering a database of vibraphone recordings with different levels of noise, showing the conditions under which the event detection is reliable. / Graduate
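The NNLSQ side of the comparison above can be sketched with an off-the-shelf non-negative least-squares solver: a frame's magnitude spectrum is decomposed against a dictionary whose columns are note templates plus a noise codeword, and large activation weights flag candidate note events. The function name, templates and values below are illustrative toys, not the thesis's actual dictionary.

```python
import numpy as np
from scipy.optimize import nnls

def detect_activations(frame, dictionary):
    """Solve min ||dictionary @ w - frame|| subject to w >= 0.
    Columns of `dictionary` are spectral templates (notes + noise);
    the returned weights say how strongly each template is active."""
    weights, _residual = nnls(dictionary, frame)
    return weights

# Toy dictionary over 4 frequency bins: two note templates and a
# flat ambient-noise codeword.
D = np.column_stack([
    [1.0, 0.5, 0.0, 0.0],        # note A template
    [0.0, 0.0, 1.0, 0.5],        # note B template
    [0.25, 0.25, 0.25, 0.25],    # noise codeword
])
frame = 2.0 * D[:, 0] + 0.1 * D[:, 2]  # note A sounding over faint noise
w = detect_activations(frame, D)       # recovers ~[2.0, 0.0, 0.1]
```

In a live setting the same decomposition would run frame by frame, with the onset detectors described above watching for jumps in the note activations.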
4

Perceptual Binaural Speech Enhancement in Noisy Environments

Dong, Rong 02 1900 (has links)
Speech enhancement in multi-speaker babble remains an enormous challenge. In this study, we developed a binaural speech enhancement system to extract information pertaining to a target speech signal embedded in a noisy background, for use in future hearing-aid systems. The principle underlying the proposed system is to simulate the perceptual auditory segregation process carried out in the normal human auditory system. Based on spatial location, pitch and onset cues, the system can identify and enhance those time-frequency regions which constitute the target speech. The proposed system is capable of dealing with a wide variety of noise intrusions, including competing speech signals and multi-speaker babble. It also works under mild reverberation conditions. Systematic evaluation shows that the system achieves substantial improvement in the intelligibility of the target signal while largely suppressing the unwanted background signal. / Thesis / Master of Applied Science (MASc)
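The time-frequency selection idea in the abstract above can be illustrated with a crude binary-mask sketch; here oracle target/interference estimates stand in for the spatial, pitch and onset cues the actual system derives from the binaural input, and the function name and toy spectrograms are illustrative only.

```python
import numpy as np

def binary_mask_enhance(mixture, target_est, noise_est, threshold_db=0.0):
    """Keep only the time-frequency cells where estimated target energy
    exceeds estimated interference by `threshold_db`; zero the rest.
    This stands in for the cue-driven region selection in the thesis."""
    eps = 1e-12
    local_snr_db = 10.0 * np.log10((target_est ** 2 + eps) /
                                   (noise_est ** 2 + eps))
    mask = (local_snr_db > threshold_db).astype(float)
    return mixture * mask, mask

# Toy magnitude spectrograms (freq x time): target dominates column 0,
# babble dominates column 1.
target = np.array([[1.0, 0.1], [1.0, 0.1]])
noise  = np.array([[0.1, 1.0], [0.1, 1.0]])
enhanced, mask = binary_mask_enhance(target + noise, target, noise)
# mask keeps column 0 and suppresses column 1 entirely.
```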
5

Exploiting Structure in Backtracking Algorithms for Propositional and Probabilistic Reasoning

Li, Wei January 2010 (has links)
Boolean propositional satisfiability (SAT) and probabilistic reasoning represent two core problems in AI, and backtracking-based algorithms have been applied to both. In this thesis, I investigate structure-based techniques for solving real-world SAT and Bayesian network instances, such as software testing and medical diagnosis problems. When solving a SAT instance using backtracking search, a sequence of decisions must be made as to which variable to branch on or instantiate next. Real-world problems are often amenable to a divide-and-conquer strategy in which the original instance is decomposed into independent sub-problems. Existing decomposition techniques are based on pre-processing the static structure of the original problem. I propose a dynamic decomposition method based on hypergraph separators. Integrating this dynamic separator decomposition into the variable ordering of a modern SAT solver leads to speedups on large real-world SAT problems. Encoding a Bayesian network into a CNF formula and then performing weighted model counting is an effective method for exact probabilistic inference. I present two encodings for improving this approach with noisy-OR and noisy-MAX relations. In our experiments, the new encodings are more space efficient and speed up the previous best approaches by over two orders of magnitude. The ability to solve similar problems incrementally is critical for many probabilistic reasoning problems. My aim is to exploit the similarity of these instances by forwarding structural knowledge learned during the analysis of one instance to the next instance in the sequence. I propose dynamic model counting and extend the dynamic decomposition and caching technique to multiple runs on a series of problems with similar structure. This allows Bayesian inference to be performed incrementally as the evidence, parameters, and structure of the network change. Experimental results show that my approach yields significant improvements over previous model counting approaches on multiple challenging Bayesian network instances.
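The noisy-OR relation targeted by the encodings above has a simple closed form: each active parent independently produces the effect with its own strength, and an optional leak term covers causes outside the model. A minimal sketch of the semantics (the helper name is illustrative; the thesis's CNF encodings themselves are not reproduced here):

```python
def noisy_or_prob(cause_probs, leak=0.0):
    """P(effect = true) under a noisy-OR: each active parent i fails to
    produce the effect with probability (1 - p_i), independently, and a
    leak term covers unmodeled causes.
    P = 1 - (1 - leak) * prod_i (1 - p_i)."""
    prod = 1.0
    for p in cause_probs:
        prod *= (1.0 - p)
    return 1.0 - (1.0 - leak) * prod

# Two active causes with strengths 0.8 and 0.5, no leak:
p = noisy_or_prob([0.8, 0.5])   # 1 - (0.2 * 0.5) = 0.9
```

The attraction for weighted model counting is exactly this factored structure: a noisy-OR node over n parents needs only n (plus leak) parameters rather than a full 2^n conditional probability table, and a CNF encoding can exploit that.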
7

Robust speech features for speech recognition in hostile environments

Toh, Aik January 1900 (has links)
Speech recognition systems have improved in robustness in recent years with respect to both speaker and acoustic variability. Nevertheless, it is still a challenge to deploy speech recognition systems in real-world applications that are exposed to diverse and significant levels of noise. Robustness and recognition accuracy are the essential criteria determining the extent to which a speech recognition system can be deployed in real-world applications, and they are the top concerns in this research. This work involves the development of techniques and extensions to extract robust features from speech and achieve substantial performance in speech recognition. The robustness issue is approached through front-end processing, in particular robust feature extraction. The author proposes a unified framework for robust features and presents a comprehensive evaluation of robustness in speech features. The framework addresses three distinct approaches: robust feature extraction, temporal information inclusion, and normalization strategies. The author discusses the issue of robust feature selection primarily in the spectral and cepstral context. Several enhancements and extensions are explored for the purpose of robustness, including a computationally efficient approach proposed for moment normalization. In addition, a simple back-end approach is incorporated to improve recognition performance in reverberant environments. Speech features in this work are evaluated in three distinct environments that occur in real-world scenarios. The thesis also discusses the effect of noise on speech features and their parameters. The author has established that statistical properties play an important role in mismatches. The significance of the research is strengthened by the evaluation of robust approaches in more than one scenario and by comparison with the performance of state-of-the-art features. The contributions and limitations of each robust feature in all three environments are highlighted. The novelty of the work lies in the diverse hostile environments in which speech features are evaluated for robustness. The author has obtained recognition accuracy of more than 98.5% for channel distortion. Recognition accuracy greater than 90.0% has also been maintained for a reverberation time of 0.4 s and additive babble noise at an SNR of 10 dB. The thesis delivers comprehensive research on robust speech features for speech recognition in hostile environments, supported by significant experimental results. Several observations, recommendations and relevant issues associated with robust speech features are presented.
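One widely used normalization strategy of the kind the framework above covers is cepstral mean and variance normalization, which counters channel mismatch by standardizing each cepstral coefficient's statistics over an utterance. A minimal sketch (this is the textbook technique, not the author's specific moment-normalization approach):

```python
import numpy as np

def cepstral_mean_variance_norm(cepstra):
    """Normalize a (frames x coefficients) cepstral matrix to zero mean
    and unit variance per coefficient. A stationary channel adds a
    constant offset in the cepstral domain, so subtracting the per-
    coefficient mean removes it; scaling by the standard deviation
    also equalizes the spread across conditions."""
    mean = cepstra.mean(axis=0)
    std = cepstra.std(axis=0)
    std[std == 0.0] = 1.0          # guard against constant coefficients
    return (cepstra - mean) / std

# Toy utterance: 2 frames x 2 cepstral coefficients.
frames = np.array([[1.0, 10.0],
                   [3.0, 10.0]])
normed = cepstral_mean_variance_norm(frames)
# First coefficient becomes [-1, 1]; the constant second becomes [0, 0].
```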
8

Novel opposition-based sampling methods for efficiently solving challenging optimization problems

Esmailzadeh, Ali 01 April 2011 (has links)
In solving noise-free and noisy optimization problems, candidate initialization and sampling play a key role, but are not deeply investigated. It is of interest to know whether the entire search space has the same quality for candidate solutions when solving different types of optimization problems. In this thesis, a comprehensive investigation is conducted to resolve these questions and to examine the effects of variant sampling methods on solving challenging optimization problems, such as large-scale, noisy, and multi-modal problems. To this end, the search space is segmented using seven segmentation schemes, namely: Center-Point, Center-Based, Modula-Opposite, Quasi-Opposite, Quasi-Reflection, Super-Opposite, and Opposite-Random. The introduced schemes are studied using Monte-Carlo simulation on various types of noise-free optimization problems, and ultimately ranked based on their performance in terms of probability of closeness, average distance to the unknown solution, number of solutions found, and diversity. Based on the results of the experiments, high-ranked schemes are selected and utilized in well-known metaheuristic algorithms as case studies. Two categories of case studies are targeted: one for a single-solution-based metaheuristic (S-metaheuristic) and another for a population-based metaheuristic (P-metaheuristic). A high-ranked single-solution-based scheme is utilized to accelerate the Simulated Annealing (SA) algorithm as a noise-free S-metaheuristic case study. Similarly, for the noise-free P-metaheuristic case study, an effective population-based algorithm, Differential Evolution (DE), has been utilized. The experiments confirm that the new algorithms outperform the parent algorithm (DE) on large-scale problems. In the same direction, with regard to solving noisy problems more efficiently, a Shaking-based sampling method is introduced, in which the original noise is tackled by adding additional noise into the search process. As a case study, the Shaking-based sampling is utilized in the DE algorithm, from which two variant algorithms have been developed that show impressive performance in comparison to the classical DE in tackling noisy large-scale problems. This thesis has created an opportunity for a comprehensive investigation of search space segmentation schemes and has proposed new sampling methods. The current study provides a guide to choosing appropriate sampling schemes for given types of problems, such as noisy, large-scale and multi-modal optimization problems. Furthermore, this thesis questions the effectiveness of the uniform-random sampling method, which is widely used in S-metaheuristic and P-metaheuristic algorithms. / UOIT
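Two of the opposition-based schemes named above can be illustrated with the standard opposition-based-learning formulas: the opposite of a candidate reflects it through the centre of the search interval, and a quasi-opposite point is drawn between the interval centre and that opposite. These are the common textbook definitions, offered as a sketch rather than the thesis's exact scheme specifications.

```python
import random

def opposite_point(x, low, high):
    """Opposite of candidate x in box [low, high], per dimension:
    x_opp_i = low_i + high_i - x_i (reflection through the centre)."""
    return [lo + hi - xi for xi, lo, hi in zip(x, low, high)]

def quasi_opposite_point(x, low, high, rng=random):
    """Quasi-opposite sample: uniform between the interval centre and
    the opposite point, per dimension (one common definition)."""
    out = []
    for xi, lo, hi in zip(x, low, high):
        centre = (lo + hi) / 2.0
        opp = lo + hi - xi
        a, b = min(centre, opp), max(centre, opp)
        out.append(rng.uniform(a, b))
    return out

x = [1.0, 8.0]
opp = opposite_point(x, [0.0, 0.0], [10.0, 10.0])        # [9.0, 2.0]
q = quasi_opposite_point(x, [0.0, 0.0], [10.0, 10.0])
```

Evaluating a candidate together with its (quasi-)opposite doubles the chance that one of the pair lands near an unknown optimum, which is the intuition behind ranking these schemes by probability of closeness.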
9

Multiple Antenna Broadcast Channels with Random Channel Side Information

Shalev Housfater, Alon 11 January 2012 (has links)
The performance of multiple input single output (MISO) broadcast channels is strongly dependent on the availability of channel side information (CSI) at the transmitter. In many practical systems, CSI may be available to the transmitter only in a corrupted and incomplete form. It is natural to assume that the flaws in the CSI are random and can be represented by a probability distribution over the channel. This work is concerned with two key issues concerning MISO broadcast systems with random CSI: performance analysis and system design. First, the impact of noisy channel information on system performance is investigated. A simple model is formulated where the channel is Rayleigh fading, the CSI is corrupted by additive white Gaussian noise and a zero forcing precoder is formed by the noisy CSI. Detailed analysis of the ergodic rate and outage probability of the system is given. Particular attention is given to system behavior at asymptotically high SNR. Next, a method to construct precoders in a manner that accounts for the uncertainty in the channel information is developed. A framework is introduced that allows one to quantify the tradeoff between the risk (due to the CSI randomness) that is associated with a precoder and the resulting transmission rate. Using ideas from modern portfolio theory, the risk-rate problem is modified to a tractable mean-variance optimization problem. Thus, we give a method that allows one to efficiently find a good precoder in the risk-rate sense. The technique is quite general and applies to a wide range of CSI probability distributions.
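The zero-forcing precoder built from noisy CSI, as in the model above, can be sketched in a few lines: the precoder is the right pseudo-inverse of the channel estimate with columns normalized for power, so with perfect CSI the effective channel is diagonal and inter-user interference vanishes, while a noise-corrupted estimate leaves residual off-diagonal interference. The function name and toy channel are illustrative.

```python
import numpy as np

def zero_forcing_precoder(H_est):
    """Zero-forcing precoder from a (users x antennas) channel estimate:
    W = H^H (H H^H)^{-1}, columns normalized to unit power. If H_est is
    the true channel, H @ W is diagonal (no inter-user interference);
    if H_est is noisy, the off-diagonal residue is the interference
    analysed in the thesis."""
    W = H_est.conj().T @ np.linalg.inv(H_est @ H_est.conj().T)
    return W / np.linalg.norm(W, axis=0, keepdims=True)

# Perfect-CSI sanity check on a toy 2-user, 2-antenna channel:
H = np.array([[1.0, 0.5],
              [0.2, 1.0]])
W = zero_forcing_precoder(H)
effective = H @ W   # diagonal: each user hears only its own stream
```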