231. An exploration of user needs and experiences towards an interactive multi-view video presentation. Danielsen, Eivind, January 2009.
After a literature review of multi-view video technologies, the work focused on a multi-view video presentation in which the user receives multiple video streams and can freely switch between them. User interaction was considered a key function of this system. The goal was to explore user needs and expectations towards an interactive multi-view video presentation. A multi-view video player was implemented according to specifications derived from possible scenarios and from user needs and expectations gathered through an online survey. The media player was written in Objective-C with Cocoa and developed using the Xcode integrated development environment and the Interface Builder GUI tool. It was built around QuickTime's QTKit framework, and the Perian plugin added extra media format support to QuickTime. The results of the online survey show that only a minority of respondents had experience with such a multi-view video presentation; however, those who had tried multi-view video were positive towards it. Usage of the system is strongly dependent on content, which should be highly entertainment- and action-oriented. Switching of views was considered a key feature by experienced users in the conducted test of the multi-view video player. This feature provides a more interactive application and more satisfied users when the content is suitable for multi-view video, and rearranging and hiding of views also contributed to a positive viewing experience. It is important to note, however, that these results are not sufficient to fully characterize user needs and expectations towards an interactive multi-view video presentation.
232. Framework for self-reconfigurable system on a Xilinx FPGA. Hamre, Sverre, January 2009.
Partially self-reconfigurable hardware has not yet become mainstream, even though the technology is available. FPGA manufacturers such as Xilinx currently offer devices that support partial self-reconfiguration. These and earlier FPGA devices were used mostly for prototyping and testing of designs before producing ASICs, since FPGAs were too expensive for final production designs. Now that prices for these devices are coming down, they appear more and more often in consumer devices such as routers and switches, where protocols can change quickly. With an FPGA in these devices, the manufacturer can update the device if there are protocol updates or bugs in the design, but currently this reconfiguration replaces the complete design rather than individual modules as they are needed. The main reason partial self-reconfiguration is not used today is the lack of tools to simplify the design and use of such a system. This thesis evaluates different aspects of partial self-reconfiguration: the current state of research is reviewed, and a proof of concept incorporating most of this research is created, with the aim of establishing a framework for partial self-reconfiguration on an FPGA. The work uses the Suzaku-V platform, which is based on a Virtex-II or Virtex-4 FPGA from Xilinx. To partially reconfigure these FPGAs, the configuration logic and configuration bitstream were studied, and based on this understanding a program was developed that can read out or insert modules in a bitstream. Partial reconfiguration in the proof of concept is controlled by a CPU on the FPGA running Linux, which simplifies many aspects of development, since many programs and communication methods are readily available in Linux. Partial self-reconfiguration on an FPGA with a hard-core PowerPC running Linux is a complicated task; many problems were encountered during the work, and hopefully many of these issues have been addressed and answered, simplifying further work. This is only a beginning, showing that partial self-reconfiguration is possible and how it can be done; more research is needed to further simplify and enhance the framework.
233. Computer Assisted Pronunciation Training: Evaluation of non-native vowel length pronunciation. Versvik, Eivind, January 2009.
Computer Assisted Pronunciation Training (CAPT) systems have become popular tools for practising a second language. Many second-language learners prefer to train pronunciation in a stress-free environment with no other listeners. No such tool exists for training pronunciation of the Norwegian language. Pronunciation exercises in training systems should target important properties of the language that second-language learners are not familiar with. In Norwegian, two acoustically similar words can be contrasted by vowel length alone; these words are called vowel length words. Vowel length is not important in many other languages. This master's thesis has examined how to build the part of a CAPT system that evaluates non-native vowel length pronunciations. To evaluate vowel length pronunciations, a vowel length classifier was developed. The approach was to segment utterances using automatic methods (Dynamic Time Warping and Hidden Markov Models) and to extract several classification features from the segmented utterances. A linear classifier, trained by the Fisher Linear Discriminant principle, was used to discriminate between short and long vowel pronunciations. A database of Norwegian minimal-pair words with respect to vowel length was recorded. Recordings from native Norwegians were used for training the classifier, and recordings from non-natives (Chinese and Iranian speakers) were used for testing, resulting in an error rate of 6.7%. Further, confidence measures were used to improve the error rate to 3.4% by discarding 8.3% of the utterances; it can be argued that more than half of the discarded utterances were correctly discarded because of errors in the pronunciation. A CAPT demo developed in a former assignment was improved to use classifiers trained with the described approach.
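A minimal sketch of the classification step described in this abstract, assuming the segmentation has already produced per-utterance features (the feature names, values and dimensions below are illustrative stand-ins, not taken from the thesis):

```python
import numpy as np

def fisher_lda_train(X_short, X_long):
    """Train a two-class Fisher linear discriminant.

    X_short, X_long: (n_samples, n_features) feature arrays
    (e.g. vowel duration, vowel/word duration ratio) for short and
    long vowel pronunciations. Returns (projection vector, threshold).
    """
    m0, m1 = X_short.mean(axis=0), X_long.mean(axis=0)
    # Within-class scatter matrix
    Sw = np.cov(X_short, rowvar=False) * (len(X_short) - 1) \
       + np.cov(X_long, rowvar=False) * (len(X_long) - 1)
    w = np.linalg.solve(Sw, m1 - m0)          # Fisher projection direction
    # Simple threshold halfway between the projected class means
    threshold = 0.5 * ((X_short @ w).mean() + (X_long @ w).mean())
    return w, threshold

def classify(x, w, threshold):
    """Return 'long' if the projected feature vector exceeds the threshold."""
    return "long" if x @ w > threshold else "short"

# Hypothetical usage with random stand-in data (durations in seconds, ratio unitless)
rng = np.random.default_rng(0)
X_short = rng.normal([0.08, 0.35], 0.02, size=(50, 2))
X_long  = rng.normal([0.16, 0.55], 0.02, size=(50, 2))
w, t = fisher_lda_train(X_short, X_long)
print(classify(np.array([0.15, 0.52]), w, t))
```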
234. A control toolbox for measuring audiovisual quality of experience. Bækkevold, Stian, January 2009.
Q2S is an organization dedicated to measuring the perceived quality of multimedia content. To make such measurements, subjective assessments are held in which a test subject gives ratings based on the perceived, subjective quality of the presented multimedia content. Subjective quality assessments are important in order to achieve a high rate of user satisfaction when viewing multimedia presentations. Human perception of quality, if quantified, can be used to adjust the presented media to maximize the user experience, or even to improve compression techniques with respect to human perception. In this thesis, software for setting up subjective assessments using a state-of-the-art video clip recorder has been developed. The software has been custom made to ensure compatibility with the hardware Q2S has available, and development has been done in Java. To let the test subject give feedback about the presented material, a MIDI device is available; SALT, an application used to log MIDI messages, has been integrated into the software to log user activity. This report outlines the main structure of the software developed during the thesis and explains its important elements in detail. The tools that have been used are discussed, focusing on the parts used in the thesis. Problems with both hardware and software are documented, as well as workarounds and limitations of the developed software.
235. Construction of digital integer arithmetic: FPGA implementation of high throughput pipelined division circuit. Øvergaard, Johan Arthur, January 2009.
This assignment was given by Defence Communication (DC), a division of Kongsberg Defence and Aerospace (KDA). KDA develops, among other things, military radio equipment for communication and data transfer. This equipment uses digital logic that performs, among other things, integer and fixed-point division. Current systems developed at KDA use both application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) to implement the digital logic, and in both technologies circuits have been implemented to perform integer and fixed-point division, designed for low latency. For future applications it is desirable to investigate the possibility of implementing a high-throughput pipelined division circuit for both 16- and 64-bit operands. In this project several commonly implemented division methods and algorithms have been studied, among them digit-recurrence and multiplicative algorithms. Of the studied methods, the multiplicative methods, which include the Goldschmidt and Newton-Raphson methods, stood out early as the best choice for implementation. Both methods require an initial approximation to the correct answer, so several methods for finding an initial approximation were investigated, among them bipartite and multipartite lookup tables. Of the two multiplicative methods, the Newton-Raphson method proved to give the best implementation, because each Newton-Raphson stage can be implemented with the same bit width as the precision out of that stage; since the precision roughly doubles each iteration, each stage is only half the size of the succeeding stage. Since the first stages were found to be small compared to the last stage, it is best to start from a rough approximation to the correct value and use more stages to reach the target precision. To evaluate how different design choices affect the speed, size and throughput of an implementation, several configurations were implemented in VHDL and synthesized to FPGAs. These implementations were optimized either for high speed with high pipeline depth and large size, or for low speed with low pipeline depth and small size, for both 16- and 64-bit operands. The syntheses showed that it is possible to achieve high speed at the cost of increased size, or a small circuit while still achieving an acceptable speed. In addition it was found that in a high-throughput pipelined division circuit it is optimal to use a less precise initial approximation and instead use more iteration stages.
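The iteration behind the chosen method can be illustrated in a few lines. This is a minimal floating-point sketch of Newton-Raphson division (reciprocal refinement followed by multiplication), assuming a crude lookup-table seed; the thesis targets a fixed-point, pipelined VHDL implementation, which this sketch does not reproduce:

```python
def nr_divide(n, d, stages=3, seed_bits=4):
    """Approximate n / d by Newton-Raphson refinement of 1/d.

    d is assumed normalized to [0.5, 1), as a hardware divider would do.
    A small table indexed by the top seed_bits of d plays the role of the
    initial-approximation lookup table; each stage roughly doubles the
    number of correct bits via x_{k+1} = x_k * (2 - d * x_k).
    """
    assert 0.5 <= d < 1.0, "operand must be pre-normalized"
    # Seed: reciprocal of the midpoint of the sub-interval of [0.5, 1) containing d
    idx = int((d - 0.5) * 2 * (1 << seed_bits))
    mid = 0.5 + (idx + 0.5) / (2 * (1 << seed_bits))
    x = 1.0 / mid
    for _ in range(stages):
        x = x * (2.0 - d * x)   # Newton-Raphson step for f(x) = 1/x - d
    return n * x

print(nr_divide(1.0, 0.7))      # ~1.428571...
```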
236. Comparator-Based Switched-Capacitor Integrator for use in Delta-Sigma Modulator. Torgersen, Svend Bjarne, January 2009.
A comparator-based switched-capacitor integrator for use in a Delta-Sigma ADC has been designed. Basic theory of comparator-based circuits is presented and design equations are developed. The integrator had a performance target of 1.5 MHz bandwidth with an SNR of 80 dB. Due to the lack of a complete modulator feedback system, the integrator was simulated in open loop, and to keep it from saturating in open loop an overshoot calibration circuit was enabled during the simulation. This resulted in severe deterioration of the integrated signal, so the results are significantly lower than expected, with an SNR of about 39 dB, but can be expected to improve in a closed-loop simulation. The power consumption of the implemented modules is 0.43 mW; however, this excludes several modules that were implemented as ideal.
237. Queue Management and Interference Control for Cognitive Radio. Håland, Pål, January 2009.
This report looks at the possibility of using a sensor network to control the interference that secondary users cause to primary users. Two Rayleigh fading channels are used: one to simulate the channel between the secondary transmitter and the sensor, and another to simulate the channel between the secondary transmitter and the secondary receiver. It is assumed that the system either uses multiple antennas or that the secondary transmitter moves relative to the sensor and the primary user, so that the channels share the same statistics. If the interference level at the sensor gets too high, the sensor should limit the transmission power of the secondary transmitter; when the interference reaches a low level, the secondary transmitter can transmit with higher power, depending on the channel between the two secondary users. The study examines where the system stabilizes, what the different variables control in the system, and how the ratio between the signal received at the sensor and the signal received at the secondary user behaves for different arrival rates. The results show that small arrival rates give the highest efficiency relative to the power at the secondary user and the sensor, and that using a peak power constraint helped stabilize the system.
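A minimal sketch of the simulation idea described above, assuming Rayleigh fading amplitudes, a simple interference threshold at the sensor, and illustrative power levels and limits (none of the parameter values are taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_power_gain(n):
    """Squared Rayleigh amplitude, i.e. exponentially distributed power gain with unit mean."""
    return rng.exponential(1.0, size=n)

n_slots = 10_000
p_max, p_min = 1.0, 0.05          # secondary transmit power levels (hypothetical)
interference_limit = 0.5          # allowed interference power at the sensor (hypothetical)

g_sensor = rayleigh_power_gain(n_slots)   # secondary transmitter -> sensor channel
g_rx     = rayleigh_power_gain(n_slots)   # secondary transmitter -> secondary receiver channel

tx_power = np.full(n_slots, p_max)
for t in range(1, n_slots):
    # Back off whenever the previous slot's interference at the sensor was too high
    if tx_power[t - 1] * g_sensor[t - 1] > interference_limit:
        tx_power[t] = p_min
    else:
        tx_power[t] = p_max

print("mean interference at sensor:", np.mean(tx_power * g_sensor))
print("mean received power at secondary rx:", np.mean(tx_power * g_rx))
```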
238. Low-power microcontroller core. Eriksen, Stein Ove, January 2009.
Energy efficiency in embedded processors is of major importance in order to achieve longer operating time for battery-operated devices. In this thesis the energy efficiency of a microcontroller based on the open-source ZPU microprocessor is evaluated and improved. The ZPU microprocessor is a zero-operand stack machine originally designed for small-size FPGA implementation, but in this thesis the core is synthesized for implementation with a 180 nm technology library. Power estimation of the design is done both before and after synthesis in the design flow, and it is shown that power estimates based on RTL simulations (before synthesis) are 35x faster to obtain than estimates based on gate-level simulations (after synthesis). The RTL estimates deviate from the gate-level estimates by only 15% and can provide faster design cycle iterations without sacrificing too much accuracy. The energy consumption of the ZPU microcontroller is reduced by implementing clock gating in the ZPU core and by adding a tiny stack cache that reduces the energy consumed by stack activity; these improvements give a 46% reduction in average power consumption. The ZPU architecture is also compared to the more common MIPS architecture: the Plasma CPU, a MIPS implementation, is synthesized and simulated to serve as a comparison to the ZPU microcontroller. The results show that the ZPU needs on average 15x as many cycles and 3x as many memory accesses as the MIPS to complete the benchmark programs.
239. A digital audio playback system with USB interface. Karlsen, Espen; Tørresen, Magne, January 2009.
A high-performance sound card is designed and implemented using a USB-enabled microcontroller and an external data converter. Data is retrieved either via USB or S/PDIF. The sampling clock is generated by a precision clock synthesizer, which is programmable and can be adapted to different sampling rates of the USB data. The system supports 24-bit, 192 kHz audio. Signal attenuation is performed through a relay-based stepped voltage divider with constant output impedance; 64 dB of attenuation in steps of 1 dB is available. An extensive power supply was made to support the range of required voltages, and its signal-to-noise ratio was measured to be 93 dB in the audio frequency band. The microcontroller has been programmed to handle the USB communication and to provide control signals for the system. The whole system is assembled on PCBs and tested. The audio performance measurements show a dynamic range of 105 dB, measured at the system output in a noisy environment, and a total harmonic distortion plus noise of 0.0011%.
240. Acoustic communication for use in underwater sensor networks. Haug, Ole Trygve, January 2009.
In this study an underwater acoustic communication system has been simulated. The simulations were performed with a simulation program called EasyPLR, which is based on the PlaneRay propagation model. Different pulse shapes were tested for use in underwater communication, and different types of loss were studied for different carrier frequencies. Changing the carrier frequency from 20 kHz to 75 kHz gives a large difference in both absorption loss and reflection loss, which means that there is a tradeoff between using a high carrier frequency for high data rate and reducing the carrier frequency to reduce the loss. The modulation technique used in this study is quadrature phase shift keying (QPSK), and different sound speed profiles were tested to see how they affect the performance. Transmission was tested for several distances up to 3 km, and the results show a significant difference in performance between 1 km and 3 km for the same noise level. Direct-sequence spread spectrum with QPSK was also simulated for different distances with good performance. The challenge is to obtain good time synchronization, and the performance is much better at 1 km than at 3 km.
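A minimal sketch of the modulation step discussed above: baseband QPSK mapping and hard-decision demodulation over a flat channel with additive noise. This is a generic illustration only; it does not model the PlaneRay propagation, pulse shaping, spreading, or synchronization used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def qpsk_modulate(bits):
    """Map bit pairs to Gray-coded QPSK symbols with unit energy."""
    bits = bits.reshape(-1, 2)
    i = 1 - 2 * bits[:, 0]            # bit 0 -> +1, bit 1 -> -1
    q = 1 - 2 * bits[:, 1]
    return (i + 1j * q) / np.sqrt(2)

def qpsk_demodulate(symbols):
    """Hard-decision demodulation back to bits."""
    bits = np.empty((symbols.size, 2), dtype=int)
    bits[:, 0] = (symbols.real < 0).astype(int)
    bits[:, 1] = (symbols.imag < 0).astype(int)
    return bits.reshape(-1)

# Hypothetical end-to-end check with additive white Gaussian noise
bits = rng.integers(0, 2, size=10_000)
tx = qpsk_modulate(bits)
noise = (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size)) * 0.2
rx_bits = qpsk_demodulate(tx + noise)
print("bit error rate:", np.mean(bits != rx_bits))
```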