101

Computer Assisted Pronunciation Training : Evaluation of non-native vowel length pronunciation

Versvik, Eivind January 2009 (has links)
Computer Assisted Pronunciation Training (CAPT) systems have become popular tools for practicing second languages. Many second-language learners prefer to practice pronunciation in a stress-free environment with no other listeners. No such tool exists for training pronunciation of Norwegian. Pronunciation exercises in training systems should target important properties of the language that second-language learners are not familiar with. In Norwegian, two acoustically similar words can be contrasted by vowel length alone; these are called vowel length words. Vowel length is not important in many other languages. This master's thesis has examined how to build the part of a Computer Assisted Pronunciation Training system that evaluates non-native vowel length pronunciations. To evaluate vowel length pronunciations, a vowel length classifier was developed. The approach was to segment utterances using automatic methods (Dynamic Time Warping and Hidden Markov Models). The segmented utterances were used to extract several classification features. A linear classifier was used to discriminate between short and long vowel length pronunciations, and was trained using the Fisher Linear Discriminant principle. A database of Norwegian minimal pairs with respect to vowel length was recorded. Recordings from native Norwegians were used to train the classifier. Recordings from non-natives (Chinese and Iranian speakers) were used for testing, resulting in an error rate of 6.7%. Further, confidence measures were used to improve the error rate to 3.4% by discarding 8.3% of the utterances. It can be argued that more than half of the discarded utterances were correctly discarded because of errors in the pronunciation. A CAPT demo, which was developed in a former assignment, was improved to use classifiers trained with the described approach.
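As a rough illustration of the classification step described above, here is a minimal Fisher Linear Discriminant sketch in Python; it is not the author's implementation, and the feature matrices `X_short` and `X_long` (one row of features per segmented utterance) are hypothetical.

```python
import numpy as np

def fit_fld(X_short, X_long):
    """Fit a Fisher Linear Discriminant to two feature matrices (rows = utterances).

    Returns the projection vector w and a scalar decision threshold.
    """
    mu_s, mu_l = X_short.mean(axis=0), X_long.mean(axis=0)
    # Within-class scatter matrix (sum of the two class scatter matrices)
    Sw = np.cov(X_short, rowvar=False) * (len(X_short) - 1) \
       + np.cov(X_long, rowvar=False) * (len(X_long) - 1)
    # Fisher direction: Sw^-1 (mu_long - mu_short)
    w = np.linalg.solve(Sw, mu_l - mu_s)
    # Simple threshold: midpoint of the projected class means
    threshold = 0.5 * (X_short @ w).mean() + 0.5 * (X_long @ w).mean()
    return w, threshold

def classify(x, w, threshold):
    """Return 'long' if the projected feature vector exceeds the threshold, else 'short'."""
    return "long" if x @ w > threshold else "short"
```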
102

A control toolbox for measuring audiovisual quality of experience

Bækkevold, Stian January 2009 (has links)
Q2S is an organization dedicated to measuring the perceived quality of multimedia content. To make such measurements, subjective assessments are held in which a test subject gives ratings based on the perceived, subjective quality of the presented multimedia content. Subjective quality assessments are important for achieving a high rate of user satisfaction when viewing multimedia presentations. Human perception of quality, if quantified, can be used to adjust the presented media to maximize the user experience, or even to improve compression techniques with respect to human perception. In this thesis, software for setting up subjective assessments using a state-of-the-art video clip recorder has been developed. The software has been custom-made to ensure compatibility with the hardware Q2S has available. Development has been done in Java. To let the test subject give feedback about the presented material, a MIDI device is available. SALT, an application used to log MIDI messages, has been integrated into the software to log user activity. This report outlines the main structure of the software developed during the thesis. The important elements of the software structure are explained in detail. The tools that have been used are discussed, focusing on the parts used in the thesis. Problems with both hardware and software are documented, as well as workarounds and limitations of the developed software.
103

Best practice for Games Design : Guidelines for ensuring quality

Christie, Petter Tobias Grindstad January 2009 (has links)
The gaming industry is getting bigger each year, and new games hit the market almost every week. But what makes a game a success or a flop? In the end it is the users, the gamers, who decide whether a game is good or not. But what does a gamer want from a game? This thesis looks mainly at what players like when it comes to communication in multiplayer games. It also looks at players' preferences for using graphics settings to control the flow of the game (the frames per second), and their preferences when it comes to music and sound in the game. A questionnaire was used to collect quantitative data, while user testing was used to obtain qualitative data on the effect of using voice over IP (VoIP) when playing a multiplayer game. The game used for testing was Age of Conan from Funcom. The collected data were analyzed, and the results were used to produce guidelines for game producers and developers making new games. These guidelines represent what are deemed best practices based on the collected data. The research concludes that VoIP plays an important role in multiplayer games and that game producers should be conscious of its use when making multiplayer games. Regarding the use of graphics settings to control the flow, the conclusion is that this is common practice and important for the game experience. Lastly, it concludes that music is very important for the mood of the game.
104

Queue Management and Interference control for Cognitive Radio

Håland, Pål January 2009 (has links)
In this report I look at the possibility of using a sensor network to control the interference caused to primary users by secondary users. I use two Rayleigh fading channels, one to simulate the channel between the secondary transmitter and the sensor, and another to simulate the channel between the secondary transmitter and the secondary receiver. I assume that the system either uses multiple antennas or that the secondary transmitter moves relative to the sensor and the primary user, so that the channels share the same statistics. If the interference level at the sensor gets too high, the transmission power at the secondary transmitter should be limited. When the interference reaches a low level, the secondary transmitter can transmit with higher power, depending on the channel between the two secondary users. I study where the system stabilizes, what the different variables control in the system, and how the ratio between the signal received at the sensor and the signal received at the secondary user behaves for different arrival rates. The results show that small arrival rates give the highest efficiency in terms of power at the secondary user relative to power at the sensor. Using a peak power constraint helped stabilize the system.
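A minimal sketch of the kind of control loop described above, with independent Rayleigh fading on the transmitter–sensor and transmitter–receiver links; the threshold, step sizes and channel parameters are assumptions for illustration, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slots = 10_000
p_max = 1.0          # assumed peak power constraint
i_threshold = 0.1    # assumed interference limit at the sensor
power = p_max

interference = np.empty(n_slots)
received = np.empty(n_slots)

for t in range(n_slots):
    # Rayleigh fading amplitude -> exponentially distributed power gain
    g_sensor = rng.exponential(1.0)    # secondary TX -> sensor link
    g_rx = rng.exponential(1.0)        # secondary TX -> secondary RX link
    interference[t] = power * g_sensor
    received[t] = power * g_rx
    # Simple control rule: back off when the sensor sees too much interference,
    # otherwise ramp the power back up towards the peak constraint.
    if interference[t] > i_threshold:
        power *= 0.5
    else:
        power = min(p_max, power * 1.1)

print(f"mean interference at sensor: {interference.mean():.3f}")
print(f"mean received power at secondary RX: {received.mean():.3f}")
```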
105

Acoustic communication for use in underwater sensor networks

Haug, Ole Trygve January 2009 (has links)
In this study an underwater acoustic communication system has been simulated. The simulations have been performed with a simulation program called EasyPLR, which is based on the PlaneRay propagation model. In the simulations, different pulse shapes have been tested for use in underwater communication. Different types of loss have also been studied for different carrier frequencies. Changing the carrier frequency from 20 kHz to 75 kHz gives a large difference in both absorption loss and reflection loss. This means there is a trade-off between using a high carrier frequency for a high data rate and reducing the carrier frequency to reduce the loss. The modulation technique used in this study is quadrature phase shift keying (QPSK), and different sound speed profiles have been tested to see how they affect the performance. The transmission distance has been tested for several distances up to 3 km. The results show a significant difference in performance at 1 km and 3 km for the same noise level. Direct sequence spread spectrum with QPSK has also been simulated for different distances with good performance. The challenge is to achieve good time synchronization, and the performance is much better at 1 km than at 3 km.
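For reference, QPSK carries two bits per symbol by choosing one of four carrier phases; a minimal baseband mapping sketch (Gray coding assumed, unrelated to the EasyPLR simulations) looks like this:

```python
import numpy as np

def qpsk_modulate(bits):
    """Map a bit sequence (even length) to Gray-coded QPSK symbols on the unit circle."""
    bits = np.asarray(bits).reshape(-1, 2)
    # Gray mapping: 00 -> 45 deg, 10 -> 135 deg, 11 -> 225 deg, 01 -> 315 deg
    i = 1 - 2 * bits[:, 0]          # in-phase component
    q = 1 - 2 * bits[:, 1]          # quadrature component
    return (i + 1j * q) / np.sqrt(2)

symbols = qpsk_modulate([0, 0, 0, 1, 1, 1, 1, 0])
print(symbols)
```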
106

Speech Analysis for Automatic Speech Recognition

Alcaraz Meseguer, Noelia January 2009 (has links)
The classical front-end analysis in speech recognition is a spectral analysis which parametrizes the speech signal into feature vectors; the most popular set of these is the Mel Frequency Cepstral Coefficients (MFCC). They are based on a standard power spectrum estimate which is first subjected to a log-based transform of the frequency axis (the mel frequency scale) and then decorrelated using a modified discrete cosine transform. Following a focused introduction to speech production, perception and analysis, this paper presents a study of the implementation of a speech generative model, whereby the speech is synthesized and recovered from its MFCC representation. The work has been developed in two steps: first, the computation of the MFCC vectors from the source speech files using the HTK software; and second, the implementation of the generative model itself, which represents the conversion chain from HTK-generated MFCC vectors back to reconstructed speech. In order to assess the quality of the speech coding into feature vectors and to evaluate the generative model, the spectral distance between the original speech signal and the one produced from the MFCC vectors has been computed. For that, spectral models based on Linear Prediction Coding (LPC) analysis have been used. During the implementation of the generative model, results have been obtained on the reconstruction of the spectral representation and the quality of the synthesized speech.
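The MFCC chain summarized above (power spectrum, mel-spaced triangular filterbank, log compression, discrete cosine transform) can be sketched roughly as follows; the frame length, filterbank size and number of coefficients are assumed values, and HTK's front end includes further steps (pre-emphasis, windowing, liftering) not shown here.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, fs, n_filters=26, n_ceps=13, n_fft=512):
    """Compute MFCCs for one windowed speech frame (assumed parameters)."""
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2            # power spectrum
    # Triangular filterbank with edges equally spaced on the mel scale
    mel_edges = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_edges) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    log_energies = np.log(fbank @ power + 1e-10)               # log compression
    return dct(log_energies, type=2, norm='ortho')[:n_ceps]    # decorrelate with DCT
```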
107

Investigation of submerged maritime target detection using LIDAR

Ayala Fernández, Unai, Hernández, Luis Manuel January 2009 (has links)
Lidar is an optical remote sensing technology which uses backscattered light to create information profiles of the scanned area. Normally air is used as the propagation medium, but in this work the efficiency of Lidar in detecting submerged targets in water is discussed. Following the theories of light propagation in air and in water, a model to simulate target detection is created. The scattering and absorption of the laser pulse in water are estimated using the Morel equations, which give accurate values for the properties of sea water. Scattering and absorption define the optical properties of the medium, from which the attenuation and backscattering coefficients are calculated. These values depend strongly on salinity, pressure, temperature, sea water constituents and so on. After the estimation of the parameters, a model based on the Lidar equation, the Fresnel equations and Snell's law has been developed with the aim of predicting the maximum range for detecting the sea surface and the maximum depth for detecting the sea bottom. In order to verify the validity of the model, a prototype 532 nm Lidar system has been used to collect experimental data. The Lidar was operated from a 50 m high building, scanning from near vertical incidence to near horizontal incidence. The data extracted from the simulations have been compared with the data obtained from the tests. This gives a predicted maximum range of 220 m for detecting the sea surface and an estimated maximum depth of 17 m for a reference target.
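The model is built around the single-scattering Lidar equation; one common form of it (written here as background, with symbols that may differ from those used in the thesis) is:

```latex
% Single-scattering (elastic) Lidar equation: received power P(R) from range R,
% with system constant C, receiver efficiency \eta, volume backscatter
% coefficient \beta(R), and extinction coefficient \alpha(r) along the path.
\begin{equation}
  P(R) = C\,\eta\,\frac{\beta(R)}{R^{2}}
         \exp\!\left(-2\int_{0}^{R}\alpha(r)\,\mathrm{d}r\right)
\end{equation}
```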
108

Localization of Sounds in the Horizontal Plane using the QuietPro System

Tønsberg, Øyvind January 2009 (has links)
This report studies the effect of electronic hearing protectors on directional hearing in the horizontal plane. Eleven subjects participated in a sound localization test, sitting in the center of a circle of 24 loudspeakers. The test was run once with open ears, and once wearing QuietPro earplugs from Nacre. The subjects were presented with 195 trials consisting of a randomly located 150 ms broadband noise burst, and were instructed to identify the source location. The results show a significant increase in localization errors when wearing the electronic hearing protectors, particularly due to an increase in source reversals. Large individual differences between subjects were also observed in this occluded condition.
109

Optimisation du réseau de transmission Bouygues Telecom Nord Est

Albareil, Coralie January 2006 (has links)
This work describes the procedure used when a modification of the transmission network must be made. The focus here is on optimization of the existing network.
110

Adaptive Frequency Hopping with Channel Prediction

Flåm, John Torjus January 2006 (has links)
The number of radio systems operating in the 2.4 GHz band is rising as a result of the increased use of wireless technologies. Since such devices interfere with one another, satisfactory co-existence becomes important. Several techniques serve to reduce the interference, among them frequency hopping (FH) and power control. This report focuses only on FH, and particularly on methods that make FH schemes adaptive. An FH scheme is adaptive if it responds to noise and fading by avoiding channels that are unfit for transmission. An example of such a scheme is already implemented in an audio transceiver, the nRF24Z1, manufactured by Nordic Semiconductor. That transceiver provides the framework for this study, and the main objective is to suggest improvements to its FH algorithm. Better performance is particularly interesting in high-quality audio streaming because such transmissions generally have strict real-time requirements. Thus, the time available to retransmit corrupted data is limited, and measures to reduce the impact of interference and fading are desired. The FH scheme implemented in the nRF24Z1 works broadly as follows: if a channel distorts more than a certain share of the transmitted data, it is removed from the FH routine and placed on a ban list. The ban list has room for at most 18 of the 38 channels and can therefore filter out significant parts of the spectrum. If the system identifies more poor channels than the list can hold, the oldest channel on the ban list is released, and the newly identified one takes its place. In a scenario where noise and deep fades occupy a fairly stable group of channels, the banned channels will match the unsuited parts of the spectrum quite accurately, and the scheme performs well. However, when the noise and fading are changing, perhaps quickly and non-periodically, the performance drops significantly. The reason is that channels are banned only after they have caused trouble, which has two negative effects. Firstly, it is likely that the bulk of the transmitted data was distorted, so the need for retransmission can be large. Secondly, since the transmission conditions are changing, the ban list becomes outdated and reflects the actual interference and fading poorly. Therefore, this report investigates the possibility of predicting poor channels in order to avoid them beforehand. For the purpose of prediction, small test packets are transmitted. In short, the principle of operation is that if a test packet is readable at the receiver, the channel is used; otherwise it is avoided. Computer simulations indicate that this technique improves transmission conditions and reduces the need for retransmission when the noise and fading change significantly. Large changes are indeed common in practice. They occur, for example, if a broadband interferer is switched off or greatly varies its output power. Plainly, they could also come about when objects move. Despite promising simulations, channel testing does not come without side effects. An audio streaming system like the nRF24Z1 must secure a certain flow of data to avoid audible errors. If prediction algorithms are to secure that flow, a compromise must be made: the more time a system spends on channel testing, the less time remains for transmission of data. Therefore, at some point, testing must be terminated to leave room for the real job. In consequence, the key issue of finding the best trade-off between testing and transmission must be addressed.

This report presents three adaptive FH schemes that approach that issue in their own manner. The performance of the proposed prediction schemes has been investigated using a channel model for the ISM (Industrial, Scientific, and Medical) band, coded and developed in MATLAB. The model mimics the effects of a real mobile channel quite well, which inspires non-negligible confidence in the simulation results.
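The ban-list behaviour and the test-packet idea described above can be sketched roughly as follows; only the list size (18 of 38 channels) and the oldest-out replacement rule are taken from the abstract, while the channel test itself is a placeholder.

```python
import random
from collections import deque

N_CHANNELS = 38
MAX_BANNED = 18                      # ban list holds at most 18 of the 38 channels

banned = deque(maxlen=MAX_BANNED)    # appending beyond maxlen releases the oldest entry

def channel_is_good(ch):
    """Placeholder for the test-packet check: return True if a short test
    packet sent on channel `ch` was readable at the receiver (assumed interface)."""
    return random.random() > 0.2     # dummy 20% bad-channel probability

def next_hop():
    """Pick the next hop channel: skip banned channels, test before use,
    and ban channels whose test packet fails."""
    while True:
        ch = random.randrange(N_CHANNELS)
        if ch in banned:
            continue
        if channel_is_good(ch):
            return ch
        banned.append(ch)            # oldest banned channel is dropped automatically

hops = [next_hop() for _ in range(10)]
print(hops)
```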
