151. Modelling, Simulation and Implementation Considerations of High Speed Continuous Time Sigma Delta ADC
Kaald, Rune. January 2008 (has links)
A state-of-the-art continuous-time sigma-delta ADC from the literature is modelled and simulated in the presence of nonidealities. Two excess loop delay compensation techniques are compared; the digital differentiation technique was found to give lower swing at the last integrator and did not need a summing amplifier, whose gain-bandwidth-induced delay the loop is sensitive to. The detrimental influence of clock jitter is shown. Different DAC linearization techniques are discussed; the DWA algorithm was simulated and found to be the best choice for linearizing the DACs. Through high-level modelling in Simulink and verification in the Cadence framework, specifications for each building block were determined, and a final simulation resulted in an SNDR of 76.3 dB.
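The DWA (data-weighted averaging) technique mentioned above rotates through the unit elements of a multi-bit DAC so that element mismatch errors are first-order noise-shaped. A minimal sketch of that element-selection rule (an illustration, not the thesis' Simulink model):

```python
def dwa_select(codes, n_elements):
    """Data-weighted averaging: for each input code, use that many DAC unit
    elements starting where the previous selection ended, wrapping around.
    Returns one boolean usage vector per sample."""
    pointer = 0
    selections = []
    for code in codes:
        used = [False] * n_elements
        for i in range(code):
            used[(pointer + i) % n_elements] = True
        pointer = (pointer + code) % n_elements
        selections.append(used)
    return selections

# Example: a 3-bit (7-element) DAC driven by a short code sequence.
for sel in dwa_select([3, 5, 2, 6], 7):
    print("".join("1" if u else "0" for u in sel))
```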
152. Reduction of speckle contrast in HDTV laser projection display
Apeland, Knut Øyvind. January 2008 (has links)
In this thesis the focus has been on laser speckle. The work is done in collaboration with poLight, who are developing a projector in which laser light is the source of illumination. In such projectors, laser speckle degrades the image quality. The aim of this project is to construct a speckle reduction device to be used in the laser projector. The theory covers a description of laser speckle, how to reduce the speckle contrast, and five methods to do so. We explain why speckle arises and which parameters we can manipulate to reduce the speckle contrast. The five speckle reduction methods included in this thesis are: vibrating diffuser, slowly moving diffuser, Hadamard matrices, scattering tube, and vibrating mirror. Large vibrational motions are unwanted, considering the size of the device, the generation of noise, and the problems with alignment of the optical components in the projector that they would lead to. The quality of the laser beam must be preserved in order to produce a sharp image, so diffusers with large scattering angles are not a good solution. The scattering tubes, designed by poLight, are tubes filled with micro pearls in a polymer gel. The size of the pearls decides the nature of the scattering; larger pearls give less back scattering and more light transmitted in the forward direction. If the tubes are rotated in a well-balanced device we can avoid generating vibrations. The Hadamard matrices are the only one of the five methods not based on motion. The challenge is to find an SLM on which to implement the matrices; it must have a low response time in order to present enough matrices during the exposure time of the eye.

The laboratory setup we use to measure the speckle contrast is an improved version of the setup constructed in the specialisation project. A screen was removed from the old setup, and the speckle is now imaged directly from the speckle reduction device. The measured speckle reduction is thus due to the device alone, and not affected by the screen. The results were reproducible and in agreement with what we expected. We implemented a vibrating diffuser, both the single and the slowly moving variant. A piece cut from a plastic bag and some Scotch Magic tape were used as diffusers. The tape is the strongest diffuser and gives the lowest speckle contrast; however, it also has the largest scattering angle. The single tape diffuser reduced the speckle contrast to $C = 0.112$. With two tape diffusers in series the intensity in the images becomes too low to exploit the dynamic range of the CCD sensor. The result is a higher calculated speckle contrast with two diffusers, $C = 0.131$, even though it ought to be smaller. We tested five prototypes of the scattering tube with different concentrations. The tube with the highest concentration has the highest speckle reduction ability; it also has the strongest scattering effect. The scattering is less than with the tape diffuser, and so is the speckle reduction. The speckle contrast is reduced to $C = 0.320$ when the tube is rotated, and to $C = 0.389$ when it is vibrated. The tubes were also tested in series with a ground glass, which acted as a second diffuser. In this setting, vibration and rotation of the tubes reduced the speckle contrast equally, to $C \approx 0.283$.

From the measured speckle contrast of the diffusers and tubes in stationary conditions, a polarization analysis should show a depolarization of the laser beam. This was the case only for the plastic diffuser. It is assumed that the error lies with the polarization analysis; there should be a depolarization in the tape and a partial depolarization in the tubes. A calculation of the speckle size was performed as well. Based on the theory we expected the size of the speckle grains to be $\sigma_s = 37.77~\mu\mathrm{m}$. From the Fourier analysis of a speckle image from the setup we calculated the speckle size to be $\sigma_s = 5.35~\mathrm{mm}$, which is approximately 140 times bigger. The expected speckle size is too small because we did not take into account a small magnification in the setup. The Fourier analysis of discrete and limited sets of data points is probably the main explanation of the difference, but a more thorough study is needed.
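The contrast values quoted above follow the usual definition of speckle contrast, $C = \sigma_I / \langle I \rangle$. A short sketch (not part of the thesis) of how it can be computed from a recorded image, and of how averaging $N$ independent speckle patterns ideally lowers it towards $1/\sqrt{N}$:

```python
import numpy as np

def speckle_contrast(image):
    """Speckle contrast C = sigma_I / <I>: standard deviation of the intensity
    divided by its mean over the analysed region."""
    img = np.asarray(image, dtype=float)
    return img.std() / img.mean()

# Fully developed speckle has C close to 1; averaging N independent
# speckle patterns ideally lowers it towards 1/sqrt(N).
rng = np.random.default_rng(0)
single = rng.exponential(scale=1.0, size=(512, 512))
averaged = np.mean([rng.exponential(1.0, (512, 512)) for _ in range(10)], axis=0)
print(round(speckle_contrast(single), 3), round(speckle_contrast(averaged), 3))
```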
153. Sensor Array Signal Processing for Source Localization
Manzano García-Muñoz, Cristina. January 2008 (has links)
This work is a study of source localization methods, more precisely of beamforming approaches. The necessary background theory is provided first and is then developed further to explain the basis of each approach. The studied problem consists of an array of sensors on which the signal to be processed impinges. Several examples of incident signals are provided in order to compare the performance of the methods. The goal of the approaches is to find the incident signal power and the direction of arrival of the signal (or signals) of interest. With this information, the source can be located in angle and range. After the study, the conclusions show which methods to choose depending on the application pursued. Finally, some ideas and guidelines for future investigation in the field are given.
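As an illustration of the class of beamforming approaches studied (a generic conventional delay-and-sum scan, not the thesis' own code), the following sketch estimates the direction of arrival for a uniform linear array from the sample covariance matrix:

```python
import numpy as np

def conventional_beamformer(X, n_sensors, d_over_lambda, angles_deg):
    """Conventional (delay-and-sum) beamformer spectrum for a uniform linear
    array. X: snapshot matrix (sensors x snapshots). Returns the output power
    for each candidate steering angle; peaks indicate directions of arrival."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
    powers = []
    for theta in np.deg2rad(angles_deg):
        n = np.arange(n_sensors)
        a = np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta))[:, None]
        powers.append(np.real(a.conj().T @ R @ a).item() / n_sensors**2)
    return np.array(powers)

# Example: one source at 20 degrees, 8-element half-wavelength-spaced array.
rng = np.random.default_rng(1)
n, snapshots = 8, 200
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(20)))
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((n, snapshots)) + 1j * rng.standard_normal((n, snapshots)))
X = np.outer(a0, s) + noise
scan = np.arange(-90, 91)
print(scan[np.argmax(conventional_beamformer(X, n, 0.5, scan))])  # peak near 20 degrees
```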
154. Objective Image Quality Metrics for Ultrasound Imaging
Simpson, Cecilie Øinæs. January 2009 (has links)
Objective evaluation of the image quality of ultrasound images is a comprehensive task due to the relatively low image quality compared to other imaging techniques. It is desirable to determine the quality of ultrasound images objectively, since quantification of the quality removes the subjective evaluation, which can lead to varying results. The scanner will also be more user friendly if the user is given feedback on the quality of the current image. This thesis has investigated the objective evaluation of image quality in phantom images. Emphasis has been placed on the parameter spatial variance, which is incorporated in the image analysis system developed during the project assignment. The spatial variance was tested for a variety of settings, for instance different beam densities and numbers of MLAs. In addition, different power spectra have been evaluated in relation to the ProbeContact algorithm developed by the Department of Circulation and Medical Imaging (ISB). The algorithm has also been incorporated in the image analysis system. The results show that the developed algorithm gives a good indication of the spatial variance. An image becomes more and more spatially variant as the beam density decreases. If the beam density goes below the Nyquist sampling limit, a point target will appear to move more slowly when passing a beam, since the region between two beams is undersampled. This effect is seen in the correlation coefficient plots, which are used as a measure of spatial variance. The results from the calculations related to the ProbeContact algorithm show that rearranging the order of the averaging and the Fourier transformation has an impact on the calculated probe contact, but the differences are tolerable. All the evaluated methods can be used, but performing the Fourier transform before averaging can be viewed as the best solution, since it gives a lateral power spectrum with low variance and a smooth mean frequency and bandwidth when these are compared over several frames. This is suggested with the reservation that basic settings are used. Performing a 1D (in the lateral direction) or 2D Fourier transform before averaging has no impact on the resulting power spectrum as long as a normalized Fourier transform is used. The conclusion is that the image analysis system, including the spatial variance parameter, is a good tool for evaluating various parameters related to image quality. The system is improved by the ProbeContact algorithm, which gives a good indication of the image quality based on the acoustic contact of the probe. Even though the image analysis system is limited to phantom images, the thesis is a starting point in the process of obtaining objective evaluation of the image quality in clinical images, since others may use it as a basis for their work.
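The spatial variance measure is based on correlation coefficients between point-target responses recorded at different lateral positions. A rough sketch of that idea, with hypothetical inputs and without reproducing the thesis' actual algorithm:

```python
import numpy as np

def spatial_variance_curve(responses):
    """responses: 2D array with one row per lateral position of a point target,
    each row the aligned (enveloped) response at that position. Returns the
    correlation coefficient of each response against the first one; a spatially
    invariant system keeps the coefficients close to 1."""
    responses = np.asarray(responses, dtype=float)
    ref = responses[0]
    return np.array([np.corrcoef(ref, r)[0, 1] for r in responses])
```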
155. Real-Time JPEG2000 Video Decoding on General-Purpose Computer Hardware
Halsteinli, Erlend. January 2009 (has links)
There is widespread use of compression in multimedia content delivery, e.g. within video-on-demand services and on transport links between live events and production sites. The content must undergo compression prior to transmission in order to deliver high-quality video and audio over most networks; this is especially true for high-definition video content. JPEG2000 is a recent image compression standard and a suitable compression algorithm for high-definition, high-rate video. With its highly flexible embedded lossless and lossy compression scheme, JPEG2000 has a number of advantages over existing video codecs. The only evident drawbacks with respect to real-time applications are that the computational complexity is quite high and that JPEG2000, being an image compression codec as opposed to a video codec, typically has higher bandwidth requirements. Special-purpose hardware can deliver high performance, but is expensive and not easily updated. A JPEG2000 decoder application running on general-purpose computer hardware can complement solutions that depend on special-purpose hardware, and its performance will scale with the available processing power. In addition, production costs will be non-existent once the application is developed. The application implemented in this project is a streaming media player. It receives a compressed video stream through an IP interface, decodes it frame by frame and presents the decoded frames in a window. The decoder is designed to take better advantage of the processing power available in today's desktop computers. Specifically, decoding is performed on both the CPU and the GPU in order to decode at least 50 frames per second of a 720p JPEG2000 video stream. The CPU-executed part of the decoder application is written in C++, is based on the Kakadu SDK and involves all decoding steps up to and including the reverse wavelet transform. The GPU-executed part of the decoder is enabled by the CUDA programming language and includes luma upsampling and the irreversible color transform. Results indicate that general-purpose computer hardware today can easily decode JPEG2000 video at bit rates up to 45 Mbit/s. However, when the video stream is received at 50 fps through the IP interface, packet loss at the socket level limits the attained frame rate to about 45 fps at rates of 40 Mbit/s or lower. If this packet loss could be eliminated, real-time decoding would be obtained up to 40 Mbit/s. At rates above 40 Mbit/s, the attained frame rate is limited by the decoder performance and not the packet loss. Higher codestream rates should be sustainable if the reverse wavelet transform could be moved from the CPU to the GPU, since the current pipeline is highly unbalanced.
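The CPU/GPU split described above is essentially a two-stage pipeline with frames flowing between the stages. A minimal, hypothetical Python sketch of that structure (placeholder stage functions standing in for the real C++/CUDA decoding steps):

```python
import queue
import threading

def pipelined_decode(frames, cpu_stage, gpu_stage, depth=4):
    """Two-stage decode pipeline sketch: a CPU stage (e.g. entropy decoding and
    reverse wavelet transform) feeds a bounded queue consumed by a GPU stage
    (e.g. upsampling and color transform), so the stages run concurrently."""
    q = queue.Queue(maxsize=depth)
    results = []

    def producer():
        for f in frames:
            q.put(cpu_stage(f))
        q.put(None)                      # sentinel: no more frames

    t = threading.Thread(target=producer)
    t.start()
    while (item := q.get()) is not None:
        results.append(gpu_stage(item))
    t.join()
    return results

# Toy usage with placeholder stages standing in for the real decode steps.
print(pipelined_decode(range(5), lambda f: f * 2, lambda f: f + 1))
```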
156. An exploration of user needs and experiences towards an interactive multi-view video presentation
Danielsen, Eivind. January 2009 (has links)
After a literature review of multi-view video technologies, the focus was placed on a multi-view video presentation in which the user receives multiple video streams and can freely switch between them. User interaction was considered a key function of such a system. The goal was to explore user needs and expectations towards an interactive multi-view video presentation. A multi-view video player was implemented according to specifications derived from possible scenarios and from user needs and expectations gathered through an online survey. The media player was written in Objective-C with Cocoa and was developed using the integrated development environment XCode and the graphical user interface tool Interface Builder. The media player was built around QuickTime's QTKit framework; a plugin, Perian, added extra media format support to QuickTime. The results from the online survey show that only a minority have experience with such a multi-view video presentation. However, those who had tried multi-view video are positive towards it. The usage of the system is strongly dependent on content, which should be highly entertainment- and action-oriented. Switching of views was considered a key feature by experienced users in the conducted test of the multi-view video player. This feature provides a more interactive application and more satisfied users when the content is suitable for multi-view video. Rearranging and hiding of views also contributed to a positive viewing experience. However, it is important to note that these results are not sufficient to fully investigate user needs and expectations towards an interactive multi-view video presentation.
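The interaction model described above (multiple synchronized streams with switching, hiding and rearranging of views) can be summarized in a small conceptual sketch; the class and method names below are illustrative and not taken from the actual Objective-C player:

```python
class MultiViewPlayer:
    """Conceptual sketch of the interaction model: several synchronized video
    streams, of which one is the main view; the user can switch, hide and
    rearrange views (no actual decoding or rendering here)."""

    def __init__(self, stream_names):
        self.streams = list(stream_names)
        self.visible = list(stream_names)   # order defines on-screen arrangement
        self.main = stream_names[0]

    def switch_to(self, name):
        if name in self.streams:
            self.main = name

    def hide(self, name):
        # The main view is never hidden.
        self.visible = [s for s in self.visible if s != name or s == self.main]

    def rearrange(self, new_order):
        if sorted(new_order) == sorted(self.visible):
            self.visible = list(new_order)

player = MultiViewPlayer(["wide", "goal-cam", "bench"])
player.switch_to("goal-cam")
player.hide("bench")
print(player.main, player.visible)
```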
157. Framework for self reconfigurable system on a Xilinx FPGA
Hamre, Sverre. January 2009 (has links)
Partial self-reconfigurable hardware has not yet become mainstream, even though the technology is available. FPGA manufacturers such as Xilinx currently offer devices capable of partial self-reconfiguration. These and earlier FPGA devices were mostly used for prototyping and testing of designs before producing ASICs, since FPGAs were too expensive for final production designs. Now that prices for these devices are coming down, it is increasingly common to see them in consumer devices such as routers and switches, where protocols can change fast. By using an FPGA in these devices, the manufacturer has the possibility to update the device if there are protocol updates or bugs in the design. Currently, however, this reconfiguration replaces the complete design, not just the modules that need to change. The main reason partial self-reconfiguration is not used today is the lack of tools to simplify the design and usage of such a system. In this thesis different aspects of partial self-reconfiguration are evaluated. The current state of research is reviewed, and a proof of concept incorporating most of this research is created, in an attempt to establish a framework for partial self-reconfiguration on an FPGA. The work uses the Suzaku-V platform, which is built around a Virtex-II or Virtex-IV FPGA from Xilinx. To be able to partially reconfigure these FPGAs, the configuration logic and the configuration bitstream have been studied. Based on this understanding of the bitstream, a program has been developed that can read modules out of, or insert modules into, a bitstream. The partial reconfiguration in the proof of concept is controlled by a CPU on the FPGA running Linux. Running Linux on the CPU simplifies many aspects of development, since many programs and communication methods are readily available. Partial self-reconfiguration on an FPGA with a hard-core PowerPC running Linux is a complicated task. Many problems were encountered during the work; hopefully many of these issues have been addressed and answered, simplifying further work. This is only a beginning, showing that partial self-reconfiguration is possible and how it can be done, and more research is needed to further simplify and enhance the framework.
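The bitstream tool mentioned above reads modules out of, and inserts them into, a configuration bitstream. The Virtex frame addressing itself is not reproduced here; purely as a hypothetical illustration of the splicing idea, with offsets assumed to be known from the module's placement:

```python
def extract_module(bitstream: bytes, offset: int, length: int) -> bytes:
    """Copy the configuration data of one module out of a full bitstream.
    offset/length are assumed to be known from the module's frame placement."""
    return bitstream[offset:offset + length]

def insert_module(bitstream: bytes, module: bytes, offset: int) -> bytes:
    """Overwrite the corresponding region of a full bitstream with a module's
    configuration data, leaving the rest of the design untouched."""
    if offset + len(module) > len(bitstream):
        raise ValueError("module does not fit at the given offset")
    return bitstream[:offset] + module + bitstream[offset + len(module):]
```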
158. Computer Assisted Pronunciation Training: Evaluation of non-native vowel length pronunciation
Versvik, Eivind. January 2009 (has links)
Computer Assisted Pronunciation Training (CAPT) systems have become popular tools for training on second languages. Many second-language learners prefer to practise pronunciation in a stress-free environment with no other listeners. No such tool exists for training on pronunciation of the Norwegian language. Pronunciation exercises in training systems should be directed at important properties of the language that second-language learners are not familiar with. In Norwegian, two acoustically similar words can be contrasted by the vowel length alone; such words are called vowel length words. The vowel length is not important in many other languages. This master thesis has examined how to build the part of a CAPT system that evaluates non-native vowel length pronunciations. To evaluate vowel length pronunciations, a vowel length classifier was developed. The approach was to segment utterances using automatic methods (Dynamic Time Warping and Hidden Markov Models). The segmented utterances were used to extract several classification features, and a linear classifier was used to discriminate between short and long vowel pronunciations. The classifier was trained according to the Fisher Linear Discriminant principle. A database of Norwegian words forming minimal pairs with respect to vowel length was recorded. Recordings from native Norwegians were used for training the classifier, and recordings from non-natives (Chinese and Iranian speakers) were used for testing, resulting in an error rate of 6.7%. Further, confidence measures were used to improve the error rate to 3.4% by discarding 8.3% of the utterances. It could be argued that more than half of the discarded utterances were correctly discarded because of errors in the pronunciation. A CAPT demo, developed in a former assignment, was improved to use classifiers trained with the described approach.
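The classifier described above follows the Fisher Linear Discriminant principle. A compact sketch of the training step for the two-class short/long vowel problem (illustrative feature matrices, not the thesis' actual feature set):

```python
import numpy as np

def fisher_lda_weights(X_short, X_long):
    """Fisher Linear Discriminant for a two-class problem (short vs. long vowel).
    Rows are feature vectors (several features per segmented utterance).
    Returns the projection weights and a midpoint decision threshold."""
    m0, m1 = X_short.mean(axis=0), X_long.mean(axis=0)
    Sw = np.cov(X_short, rowvar=False) + np.cov(X_long, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)          # w proportional to Sw^{-1}(m1 - m0)
    threshold = w @ (m0 + m1) / 2
    return w, threshold

def classify(x, w, threshold):
    """Return 'long' if the projected feature vector exceeds the threshold."""
    return "long" if x @ w > threshold else "short"
```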
159. A control toolbox for measuring audiovisual quality of experience
Bækkevold, Stian. January 2009 (has links)
Q2S is an organization dedicated to measuring the perceived quality of multimedia content. In order to make such measurements, subjective assessments are held in which a test subject rates the perceived, subjective quality of the presented multimedia content. Subjective quality assessments are important in order to achieve a high degree of user satisfaction when viewing multimedia presentations. Human perception of quality, if quantified, can be used to adjust the presented media to maximize the user experience, or even to improve compression techniques with respect to human perception. In this thesis, software for setting up subjective assessments using a state-of-the-art video clip recorder has been developed. The software has been custom made to ensure compatibility with the hardware Q2S has available, and development has been done in Java. To let the test subject give feedback about the presented material, a MIDI device is available. SALT, an application used to log MIDI messages, has been integrated in the software to log user activity. This report outlines the main structure of the software developed during the thesis, and the important elements of the software structure are explained in detail. The tools that have been used are discussed, focusing on the parts relevant to the thesis. Problems with both hardware and software are documented, as well as workarounds and limitations of the developed software.
160. Construction of digital integer arithmetic: FPGA implementation of high throughput pipelined division circuit
Øvergaard, Johan Arthur. January 2009 (has links)
This assignment was given by Defence Communication (DC), a division of Kongsberg Defence and Aerospace (KDA). KDA develops, amongst other things, military radio equipment for communication and data transfer. This equipment contains digital logic that performs, amongst other things, integer and fixed-point division. Current systems developed at KDA use both application-specific integrated circuits (ASIC) and field-programmable gate arrays (FPGA) to implement the digital logic, and in both technologies circuits performing integer and fixed-point division have been implemented. These are designed for low latency. For future applications it is desirable to investigate the possibility of implementing a high-throughput pipelined division circuit for both 16- and 64-bit operands. In this project several commonly implemented division methods and algorithms have been studied, amongst others digit recurrence and multiplicative algorithms. Of the studied methods, the multiplicative methods stood out early as the best choice for implementation. These methods include the Goldschmidt and Newton-Raphson methods. Both require an initial approximation towards the correct answer, so several methods for finding an initial approximation were investigated, amongst others bipartite and multipartite lookup tables. Of the two multiplicative methods, the Newton-Raphson method proved to give the best implementation. This is because, with the Newton-Raphson method, each stage can be implemented with the same bit width as the precision out of that stage, which means that each stage is only half the size of the succeeding stage. Also, since the first stages were found to be small compared to the last stage, it is best to use a rough initial approximation and then use more stages to achieve the target precision. To evaluate how different design choices affect the speed, size and throughput of an implementation, several configurations were implemented in VHDL and synthesized to FPGAs. These implementations were optimized either for high speed, with high pipeline depth and large size, or for low speed, with low pipeline depth and small size, for both 16- and 64-bit operands. The syntheses showed that it is possible to achieve high speed at the cost of increased size, or a small circuit while still achieving an acceptable speed. In addition, it was found that in a high-throughput pipelined division circuit it is optimal to use a less precise initial approximation and instead use more iteration stages.
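Newton-Raphson division, as discussed above, refines a reciprocal estimate with the iteration $x_{i+1} = x_i(2 - d\,x_i)$, roughly doubling the number of correct bits per stage. A minimal floating-point sketch (the thesis' fixed-point VHDL pipeline and lookup-table seed are not reproduced here):

```python
import math

def newton_raphson_reciprocal(d, stages=3):
    """Reciprocal of d via Newton-Raphson. d is normalized to m in [0.5, 1)
    with frexp, seeded by the linear approximation 48/17 - 32/17*m, and each
    stage x <- x*(2 - m*x) roughly doubles the number of correct bits."""
    if d == 0:
        raise ZeroDivisionError("division by zero")
    sign = -1.0 if d < 0 else 1.0
    m, e = math.frexp(abs(d))          # abs(d) = m * 2**e with 0.5 <= m < 1
    x = 48.0 / 17.0 - 32.0 / 17.0 * m  # rough seed (a lookup table in hardware)
    for _ in range(stages):
        x = x * (2.0 - m * x)
    return sign * math.ldexp(x, -e)

def newton_raphson_divide(n, d, stages=3):
    return n * newton_raphson_reciprocal(d, stages)

print(newton_raphson_divide(355.0, 113.0))   # approximately 3.14159...
```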