
Objective Image Quality Metrics for Ultrasound Imaging

Simpson, Cecilie Øinæs January 2009 (has links)
Objective evaluation of the image quality of ultrasound images is a comprehensive task due to the relatively low image quality compared to other imaging techniques. It is desirable to objectively determine the quality of ultrasound images, since quantification of the quality removes the subjective evaluation which can lead to varying results. The scanner will also be more user-friendly if the user is given feedback on the quality of the current image. This thesis has investigated the objective evaluation of image quality in phantom images. Emphasis has been placed on the parameter spatial variance, which is incorporated in the image analysis system developed during the project assignment. The spatial variance was tested for a variety of settings, such as different beam densities and numbers of MLAs. In addition, different power spectra have been evaluated related to the ProbeContact algorithm developed by the Department of Circulation and Medical Imaging (ISB). The algorithm has also been incorporated in the image analysis system. The results show that the developed algorithm gives a good indication of the spatial variance. An image becomes increasingly spatially variant as the beam density decreases. If the beam density goes below the Nyquist sampling limit, the point target will appear to move more slowly when passing a beam, since the region between two beams is undersampled. This effect is seen in the correlation coefficient plots, which are used as a measure of spatial variance. The results from the calculations related to the ProbeContact algorithm show that rearranging the order of the averaging and the Fourier transformation has an impact on the calculated probe contact, but the differences are tolerable. All the evaluated methods can be used, but performing the Fourier transform before averaging can be viewed as the best solution, since it gives a lateral power spectrum with low variance and a smooth mean frequency and bandwidth when they are compared over several frames. This is suggested with the reservation that basic settings are used. Performing a 1D (in the lateral direction) or 2D Fourier transform before averaging will not have any impact on the resulting power spectrum as long as a normalized Fourier transform is used. The conclusion is that the image analysis system, including the spatial variance parameter, is a good tool for evaluating various parameters related to image quality. The system is improved by the ProbeContact algorithm, which gives a good indication of the image quality based on the acoustic contact of the probe. Even though the image analysis system is limited to phantom images, the thesis is a starting point in the process of obtaining objective evaluation of the image quality in clinical images, since others may use it as a basis for their work.
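As a rough, hedged sketch (not the thesis's own image analysis code, and all names are assumptions of this illustration), the two measures discussed above can be illustrated in NumPy: spatial variance via correlation coefficients between the point-target response at a reference beam position and the responses at neighbouring positions, and the lateral power spectrum computed with the Fourier transform applied before or after averaging.

import numpy as np

def lateral_correlation(responses):
    # responses: (n_positions, n_samples); each row is the envelope-detected
    # point-target response recorded at one lateral beam position.
    ref = responses[0] - responses[0].mean()
    coeffs = []
    for row in responses:
        cur = row - row.mean()
        denom = np.sqrt(np.sum(ref ** 2) * np.sum(cur ** 2))
        coeffs.append(np.sum(ref * cur) / denom if denom > 0 else 0.0)
    return np.array(coeffs)  # a dip between beams suggests lateral undersampling

def lateral_power_spectrum(lines, fft_first=True):
    # lines: (n_lines, n_samples) lateral lines from one frame.
    # fft_first=True averages the per-line power spectra (FFT before averaging);
    # fft_first=False takes the FFT of the averaged line. A normalized transform
    # is used, as in the discussion above.
    if fft_first:
        return np.mean(np.abs(np.fft.fft(lines, axis=1, norm="ortho")) ** 2, axis=0)
    return np.abs(np.fft.fft(lines.mean(axis=0), norm="ortho")) ** 2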

Real-Time JPEG2000 Video Decoding on General-Purpose Computer Hardware

Halsteinli, Erlend January 2009 (has links)
There is widespread use of compression in multimedia content delivery, e.g. within video on demand services and transport links between live events and production sites. The content must undergo compression prior to transmission in order to deliver high-quality video and audio over most networks; this is especially true for high-definition video content. JPEG2000 is a recent image compression standard and a suitable compression algorithm for high-definition, high-rate video. With its highly flexible embedded lossless and lossy compression scheme, JPEG2000 has a number of advantages over existing video codecs. The only evident drawbacks with respect to real-time applications are that the computational complexity is quite high and that JPEG2000, being an image compression codec as opposed to a video codec, typically has higher bandwidth requirements. Special-purpose hardware can deliver high performance, but is expensive and not easily updated. A JPEG2000 decoder application running on general-purpose computer hardware can complement solutions depending on special-purpose hardware and will see its performance scale with the available processing power. In addition, production costs will be non-existent once developed. The application implemented in this project is a streaming media player. It receives a compressed video stream through an IP interface, decodes it frame by frame and presents the decoded frames in a window. The decoder is designed to better take advantage of the processing power available in today's desktop computers. Specifically, decoding is performed on both CPU and GPU in order to decode a minimum of 50 frames per second of a 720p JPEG2000 video stream. The CPU-executed part of the decoder application is written in C++, based on the Kakadu SDK, and involves all decoding steps up to and including the reverse wavelet transform. The GPU-executed part of the decoder is enabled by the CUDA programming language, and includes luma upsampling and the irreversible color transform. Results indicate that general-purpose computer hardware today can easily decode JPEG2000 video at bit rates up to 45 Mbit/s. However, when the video stream is received at 50 fps through the IP interface, packet loss at the socket level limits the attained frame rate to about 45 fps at rates of 40 Mbit/s or lower. If this packet loss could be eliminated, real-time decoding would be obtained up to 40 Mbit/s. At rates above 40 Mbit/s, the attained frame rate is limited by the decoder performance and not by the packet loss. Higher codestream rates should be sustainable if the reverse wavelet transform could be moved from the CPU to the GPU, since the current pipeline is highly unbalanced.
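The irreversible color transform assigned to the GPU above is the standard ICT defined by JPEG2000 (ITU-R BT.601 weights). A hedged NumPy sketch of its inverse is given below; it is only an illustration of that decoding step, not the thesis's CUDA kernel, and it assumes the chroma planes have already been upsampled to the luma resolution.

import numpy as np

def inverse_ict(y, cb, cr):
    # Inverse irreversible color transform (YCbCr -> RGB, BT.601 weights).
    # y, cb, cr: float arrays of equal shape, luma in [0, 255] and chroma
    # centred on zero.
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)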

An exploration of user needs and experiences towards an interactive multi-view video presentation

Danielsen, Eivind January 2009 (has links)
After a literature review of multi-view video technologies, the focus was placed on a multi-view video presentation in which the user receives multiple video streams and can freely switch between them. User interaction was considered to be a key function of this system. The goal was to explore user needs and expectations towards an interactive multi-view video presentation. A multi-view video player was implemented according to specifications derived from possible scenarios and from user needs and expectations gathered through an online survey. The media player was written in Objective-C with Cocoa and was developed using the integrated development environment Xcode and the graphical user interface tool Interface Builder. The media player was built around QuickTime's QTKit framework. A plugin tool, Perian, added extra media format support to QuickTime. The results from the online survey show that only a minority have experience with such a multi-view video presentation. However, those who had tried multi-view video are positive towards it. The usage of the system is strongly dependent on content. The content should be highly entertainment- and action-oriented. Switching of views was considered a key feature by the users who took part in the conducted test of the multi-view video player. This feature provides a more interactive application and more satisfied users when the content is suitable for multi-view video. Rearranging and hiding of views also contributed to a positive viewing experience. However, it is important to note that these results are not sufficient to fully investigate user needs and expectations towards an interactive multi-view video presentation.
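As a hypothetical sketch (in Python rather than the thesis's Objective-C/QTKit code, with all names invented for illustration), the switching, hiding and rearranging of views that the test users valued boils down to a small piece of player state:

class MultiViewState:
    # Minimal state a multi-view player has to track: which streams exist,
    # their on-screen order, which ones are hidden, and which one has focus.
    def __init__(self, stream_ids):
        self.order = list(stream_ids)      # on-screen arrangement
        self.hidden = set()                # views the user has hidden
        self.active = self.order[0]        # view currently given focus

    def switch_to(self, stream_id):
        if stream_id in self.order and stream_id not in self.hidden:
            self.active = stream_id

    def hide(self, stream_id):
        self.hidden.add(stream_id)
        if self.active == stream_id:
            visible = [s for s in self.order if s not in self.hidden]
            self.active = visible[0] if visible else None

    def rearrange(self, new_order):
        assert sorted(new_order) == sorted(self.order)
        self.order = list(new_order)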

Computer Assisted Pronunciation Training: Evaluation of non-native vowel length pronunciation

Versvik, Eivind January 2009 (has links)
Computer Assisted Pronunciation Training systems have become popular tools for training on second languages. Many second language learners prefer to train on pronunciation in a stress-free environment with no other listeners. No such tool exists for training on pronunciation of the Norwegian language. Pronunciation exercises in training systems should be directed at important properties of the language that second language learners are not familiar with. In Norwegian, two acoustically similar words can be contrasted by the vowel length; these words are called vowel length words. The vowel length is not important in many other languages. This master thesis has examined how to make the part of a Computer Assisted Pronunciation Training system which can evaluate non-native vowel length pronunciations. To evaluate vowel length pronunciations, a vowel length classifier was developed. The approach was to segment utterances using automatic methods (Dynamic Time Warping and Hidden Markov Models). The segmented utterances were used to extract several classification features. A linear classifier was used to discriminate between short and long vowel length pronunciations. The classifier was trained by the Fisher Linear Discriminant principle. A database of Norwegian words forming minimal pairs with respect to vowel length was recorded. Recordings from native Norwegians were used for training the classifier. Recordings from non-natives (Chinese and Iranians) were used for testing, resulting in an error rate of 6.7%. Further, confidence measures were used to improve the error rate to 3.4% by discarding 8.3% of the utterances. It could be argued that more than half of the discarded utterances were correctly discarded because of errors in the pronunciation. A CAPT demo, which was developed in a former assignment, was improved to use classifiers trained with the described approach.
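A hedged sketch of the classification stage follows, assuming (as an illustration only) duration-based features such as vowel duration and vowel-to-word duration ratio; the DTW/HMM segmentation is not reproduced. The projection is the standard two-class Fisher Linear Discriminant, and the rejection margin plays the role of a simple confidence measure.

import numpy as np

def train_fisher_lda(x_short, x_long):
    # x_short, x_long: (n_i, d) feature matrices for short and long vowels.
    m0, m1 = x_short.mean(axis=0), x_long.mean(axis=0)
    scatter = lambda x, m: (x - m).T @ (x - m)
    sw = scatter(x_short, m0) + scatter(x_long, m1)   # within-class scatter
    w = np.linalg.solve(sw, m1 - m0)                  # Fisher direction
    bias = -0.5 * w @ (m0 + m1)                       # threshold at class midpoint
    return w, bias

def classify(x, w, bias, reject_margin=0.0):
    # x: (d,) feature vector. Returns 'long', 'short', or None when the score
    # falls inside the rejection margin (utterance discarded, as with the
    # confidence measures described above).
    score = x @ w + bias
    if abs(score) < reject_margin:
        return None
    return "long" if score > 0 else "short"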

A control toolbox for measuring audiovisual quality of experience

Bækkevold, Stian January 2009 (has links)
Q2S is an organization dedicated to measuring the perceived quality of multimedia content. In order to make such measurements, subjective assessments are held in which a test subject gives ratings based on the perceived, subjective quality of the presented multimedia content. Subjective quality assessments are important in order to achieve a high rate of user satisfaction when viewing multimedia presentations. Human perception of quality, if quantified, can be used to adjust the presented media to maximize the user experience, or even to improve compression techniques with respect to human perception. In this thesis, software for setting up subjective assessments using a state-of-the-art video clip recorder has been developed. The software has been custom made to ensure compatibility with the hardware Q2S has available. Development has been done in Java. To let the test subject give feedback about the presented material, a MIDI device is available. SALT, an application used to log MIDI messages, has been integrated in the software to log user activity. This report outlines the main structure of the software that has been developed during the thesis. The important elements of the software structure are explained in detail. The tools that have been used are discussed, focusing on the parts that have been used in the thesis. Problems with both hardware and software are documented, as well as workarounds and limitations of the developed software.
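A hedged sketch of the logging the toolbox performs is shown below; it is a Python stand-in with invented names, not the Java/SALT implementation, and only illustrates the idea of storing timestamped ratings coming from the subject's input device.

import csv, time

class RatingLogger:
    # Stores timestamped rating values given by a test subject during playback.
    def __init__(self, path):
        self.path = path
        self.rows = []

    def log(self, subject_id, clip_id, value):
        self.rows.append((time.time(), subject_id, clip_id, value))

    def save(self):
        with open(self.path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "subject", "clip", "rating"])
            writer.writerows(self.rows)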

Queue Management and Interference Control for Cognitive Radio

Håland, Pål January 2009 (has links)
In this report I look at the possibility of using a sensor network to control the interference caused to primary users by secondary users. I use two Rayleigh fading channels: one to simulate the channel between the secondary transmitter and the sensor, and another to simulate the channel between the secondary transmitter and the secondary receiver. I assume that the system either uses multiple antennas or that the secondary transmitter is moving relative to the sensor and the primary user, so that the channels share the same statistics. If the interference level at the sensor gets too high, the sensor should limit the transmission power of the secondary transmitter; when the interference reaches a low level, the secondary transmitter can transmit with a higher power, depending on the channel between the two secondary users. I study where the system stabilizes, what the different variables control in the system, and how the ratio between the signal received at the sensor and the signal received at the secondary user behaves for different arrival rates. In the results I found that small arrival rates have the highest efficiency when comparing the power at the secondary user and the sensor. Using a peak power constraint helped stabilize the system.
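The feedback loop described above can be sketched as follows (a hedged illustration with assumed thresholds, step sizes and unit-variance Rayleigh channels, not the simulation code used in the report): the secondary transmitter ramps its power up while the interference sensed over one fading channel stays under a mask, and backs off, subject to a peak power constraint, when the mask is exceeded.

import numpy as np

rng = np.random.default_rng(0)

def rayleigh_gain(n):
    # Squared magnitude of a unit-variance complex Gaussian channel.
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return np.abs(h) ** 2

def simulate(n_slots=10_000, p_peak=2.0, i_max=0.1, step=0.05):
    g_sensor = rayleigh_gain(n_slots)   # secondary tx -> interference sensor
    g_rx = rayleigh_gain(n_slots)       # secondary tx -> secondary receiver
    p, powers, rates = 0.1, [], []
    for k in range(n_slots):
        if p * g_sensor[k] > i_max:     # sensed interference too high: back off
            p = max(p - step, 0.0)
        else:                           # room under the mask: ramp up
            p = min(p + step, p_peak)   # peak power constraint
        powers.append(p)
        rates.append(np.log2(1 + p * g_rx[k]))  # achievable-rate proxy
    return np.mean(powers), np.mean(rates)

print(simulate())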

Acoustic communication for use in underwater sensor networks

Haug, Ole Trygve January 2009 (has links)
In this study an underwater acoustic communication system has been simulated. The simulations have been performed using a simulation program called EasyPLR, which is based on the PlaneRay propagation model. In the simulations, different pulse shapes have been tested for use in underwater communication. Different types of loss have also been studied for different carrier frequencies. Changing the carrier frequency from 20 kHz to 75 kHz gives a large difference in both absorption loss and reflection loss. This means that there is a trade-off between using a high carrier frequency for a high data rate and reducing the carrier frequency to reduce the loss. The modulation technique used in this study is quadrature phase shift keying (QPSK), and different sound speed profiles have been tested to see how they affect the performance. The transmission distance has been tested for several distances up to 3 km. The results show a significant difference in performance at 1 km and 3 km for the same noise level. Direct sequence spread spectrum with QPSK has also been simulated for different distances with good performance. The challenge is to achieve good time synchronization, and the performance is much better at 1 km than at 3 km.
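As a hedged illustration of two of the ingredients discussed above, the sketch below shows Gray-mapped QPSK symbol generation and a frequency-dependent seawater absorption estimate. Thorp's empirical formula is used here purely as an example; the thesis relies on the PlaneRay/EasyPLR model, which is not reproduced.

import numpy as np

def qpsk_modulate(bits):
    # Gray-mapped QPSK: pairs of bits -> unit-energy complex symbols.
    b = np.asarray(bits).reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def thorp_absorption_db_per_km(f_khz):
    # Thorp's empirical seawater absorption (dB/km), frequency in kHz.
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

for f in (20.0, 75.0):
    # roughly 4 dB/km at 20 kHz versus 27 dB/km at 75 kHz
    print(f"{f:5.1f} kHz: {thorp_absorption_db_per_km(f):6.2f} dB/km")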

Speech Analysis for Automatic Speech Recognition

Alcaraz Meseguer, Noelia January 2009 (has links)
The classical front-end analysis in speech recognition is a spectral analysis which parametrizes the speech signal into feature vectors; the most popular set of these is the Mel Frequency Cepstral Coefficients (MFCC). They are based on a standard power spectrum estimate which is first subjected to a log-based transform of the frequency axis (the mel-frequency scale), and then decorrelated using a modified discrete cosine transform. Following a focused introduction on speech production, perception and analysis, this paper presents a study of the implementation of a speech generative model, whereby the speech is synthesized and recovered back from its MFCC representation. The work has been developed in two steps: first, the computation of the MFCC vectors from the source speech files using the HTK software; and second, the implementation of the generative model itself, which represents the conversion chain from HTK-generated MFCC vectors back to reconstructed speech. In order to assess the quality of the speech coding into feature vectors and to evaluate the generative model, the spectral distance between the original speech signal and the one produced from the MFCC vectors has been computed. For this, spectral models based on Linear Predictive Coding (LPC) analysis have been used. During the implementation of the generative model, results have been obtained in terms of the reconstruction of the spectral representation and the quality of the synthesized speech.
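The analysis chain described above (framing, windowing, power spectrum, mel filterbank, log compression, DCT) can be sketched in NumPy as follows. This is only a hedged illustration; the thesis computes the coefficients with HTK, and the frame length, hop, number of filters and number of cepstra below are assumed values.

import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        if c > l:
            fb[i - 1, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[i - 1, c:r] = (r - np.arange(c, r)) / (r - c)
    return fb

def mfcc(signal, sr, frame_len=400, hop=160, n_fft=512, n_filters=26, n_ceps=13):
    # Frame -> Hamming window -> power spectrum -> mel filterbank -> log -> DCT-II.
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft, axis=1)) ** 2 / n_fft
    logmel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_filters)))
    return logmel @ dct.T    # shape (n_frames, n_ceps)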

Investigation of submerged maritime target detection using LIDAR

Ayala Fernández, Unai, Hernández, Luis Manuel January 2009 (has links)
Lidar is an optical remote sensing technology which uses backscattered light to create information profiles of the scanned area. Normally air is used as the propagation medium, but in this work the Lidar's ability to detect submerged targets in water is discussed. Following the theories of light propagation in air and in water, a model to simulate target detection is created. The values of scattering and absorption of the laser pulse in water are estimated by Morel's equations, which give accurate values of the sea water properties. Scattering and absorption define the optical properties of the medium, from which the attenuation and the backscattering coefficient are calculated. These values depend strongly on salinity, pressure, temperature, sea water constituents and so on. After the estimation of the parameters, a model based on the Lidar equation, the Fresnel equations and Snell's law has been developed with the aim of predicting the maximum range at which the sea surface can be detected and the maximum depth at which the sea bottom can be detected. In order to verify the accuracy of the model, a prototype 532 nm Lidar system has been used to collect experimental data. The Lidar was used from a 50 m high building, scanning from near vertical incidence to near horizontal incidence. The data extracted from the simulations have been compared with the data obtained from the performed tests. This has given a predicted maximum range of 220 m for detecting the sea surface and an estimated maximum depth of 17 m for a reference target.
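The interface and in-water attenuation terms of such a model can be sketched as follows, as a hedged illustration only: refraction from Snell's law, unpolarized Fresnel power transmittance at the air-water boundary, and two-way exponential attenuation along the in-water path. The attenuation coefficient below is a placeholder, not the Morel-derived value estimated in the thesis, and geometric spreading and target reflectivity are left out.

import numpy as np

N_AIR, N_WATER = 1.0, 1.33

def fresnel_transmittance(theta_i):
    # Unpolarized Fresnel power transmittance, air -> water, at incidence
    # angle theta_i (radians); the refraction angle follows Snell's law.
    theta_t = np.arcsin(N_AIR * np.sin(theta_i) / N_WATER)
    ci, ct = np.cos(theta_i), np.cos(theta_t)
    rs = ((N_AIR * ci - N_WATER * ct) / (N_AIR * ci + N_WATER * ct)) ** 2
    rp = ((N_AIR * ct - N_WATER * ci) / (N_AIR * ct + N_WATER * ci)) ** 2
    return 1.0 - 0.5 * (rs + rp)

def relative_return(theta_i, depth_m, c_att=0.15):
    # Relative signal from a submerged target: two interface crossings and
    # two-way attenuation over the slant path in water. c_att (1/m) is a
    # placeholder beam attenuation coefficient at 532 nm.
    theta_t = np.arcsin(N_AIR * np.sin(theta_i) / N_WATER)
    path = depth_m / np.cos(theta_t)
    t = fresnel_transmittance(theta_i)
    return t ** 2 * np.exp(-2.0 * c_att * path)

print(relative_return(np.radians(10.0), 17.0))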

Localization of Sounds in the Horizontal Plane using the QuietPro System

Tønsberg, Øyvind January 2009 (has links)
This report studies the effect of electronic hearing protectors on directional hearing in the horizontal plane. 11 subjects participated in a sound localization test, sitting in the center of a circle of 24 speakers. The test was run once with open ears, and once wearing QuietPro earplugs from Nacre. The subjects were presented with 195 trials consisting of a randomly located 150 ms broadband noise burst, and were instructed to identify the source location. The results show that there was a significant increase in localization errors when wearing the electronic hearing protectors, particularly due to an increase in source reversals. Large individual differences between subjects were also observed in this occluded condition.
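A hedged sketch of how the two reported measures can be computed from target and response speaker azimuths is given below; the angle convention (0 degrees straight ahead, positive to the right) and the reversal tolerance are assumptions of this illustration, not taken from the report.

import numpy as np

def wrap_deg(a):
    # Wrap angles to the [-180, 180) range.
    return (np.asarray(a, float) + 180.0) % 360.0 - 180.0

def localization_errors(target_deg, response_deg, reversal_tol=22.5):
    # Mean absolute angular error, plus the number of front-back reversals:
    # responses that land closer to the target mirrored about the interaural
    # (left-right) axis than to the target itself, within the tolerance.
    t = np.asarray(target_deg, float)
    r = np.asarray(response_deg, float)
    err = np.abs(wrap_deg(r - t))
    mirrored = wrap_deg(180.0 - t)              # front-back mirror image
    rev = np.abs(wrap_deg(r - mirrored)) < np.minimum(err, reversal_tol)
    return err.mean(), int(rev.sum())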
