
Low power/high performance dynamic reconfigurable filter-design

Bystrøm, Vebjørn January 2008 (has links)
The main idea behind this thesis was to optimize the multipliers in a finite impulse response (FIR) filter. The project was chosen because digital filters are very common in digital signal processing and are an exciting area to work in. The first part of the text describes some theory behind digital filters and how to optimize the multipliers that are part of them. The main point to emphasize here is the use of Canonical Signed Digit (CSD) encoding. CSD representation of FIR filter coefficients can reduce the delay and complexity of the hardware implementation: CSD encoding reduces the number of non-zero digits and thereby reduces the multiplication process to a few additions/subtractions and shifts. In this thesis, four versions of the same filter were designed and implemented on an FPGA. The most substantial and interesting results were the differences between coefficients that were CSD-encoded and coefficients represented in 2's complement. It was shown that the filter version with CSD-encoded coefficients used almost 20% less area than the filter version with 2's complement coefficients. The CSD-encoded filter could run at a maximum frequency of 504.032 MHz, compared to 249.123 MHz for the other filter. One of the filters was designed using the * operator in VHDL, which proved to be the most efficient in terms of both the number of slices and speed. The reason is that an FPGA has built-in multipliers, so when these are available they give better results than implementing the multiplication in the FPGA's logic blocks. A filter with the ability to change its coefficients at run-time, without restarting the design process from the beginning, was also discussed. This is an advantage because a constant-coefficient multiplier requires the FPGA to be reconfigured and the whole design cycle to be re-run. The drawback of the dynamic multiplier is that it uses more hardware resources.
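The CSD recoding described above can be sketched in a few lines. This is an illustrative Python model of the digit recoding only, not the thesis's VHDL implementation; the function names are invented for the example.

```python
def to_csd(value):
    """Recode an integer coefficient as Canonical Signed Digits.

    Returns digits in {-1, 0, +1}, LSB first. CSD guarantees no two
    adjacent non-zero digits, so each non-zero digit costs only one
    shift plus one add/subtract in a hardware multiplier.
    """
    digits = []
    while value != 0:
        if value % 2 == 0:
            digits.append(0)
            value //= 2
        else:
            # Pick +1 or -1 so the remaining value becomes even,
            # forcing the next digit to be zero: +1 if value = 1 (mod 4),
            # -1 if value = 3 (mod 4).
            d = 2 - (value % 4)
            digits.append(d)
            value = (value - d) // 2
    return digits

def csd_multiply(x, csd_digits):
    """Multiplication by a CSD-coded constant: only shifts and adds."""
    return sum(d * (x << i) for i, d in enumerate(csd_digits))
```

For example, the binary coefficient 7 (`111`, three non-zero bits) recodes to `8 - 1` (two non-zero digits), trading three partial products for one addition and one subtraction.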

A Pragmatic Approach to Modulation Scaling Based Power Saving for Maximum Communication Path Lifetime in Wireless Sensor Networks

Malavia Marín, Raúl January 2008 (has links)
The interest in Wireless Sensor Networks is rapidly increasing due to their advantages in cost, coverage and ease of network deployment. They are present in civil applications, and in most scenarios the tiny sensor nodes depend exclusively on batteries as their power source. Energy consumption is therefore an important research issue, and many interesting projects have been carried out in several areas, focusing on topology, Medium Access Control or physical-layer issues. Many projects target the physical layer, where a node's power consumption is optimized by scaling the modulation scheme used in node communications. Results show that an optimal modulation scheme can lead to minimum power consumption over the whole wireless sensor network. A usual simplification in research is to target individual paths and not take the whole network into account. However, nodes may be part of several paths, and therefore nodes closer to the sinks may consume more energy. This fact is the chief motivation of our research, where modulation scaling is applied to the nodes with more energy in order to increase the lifetime of the nodes with lower energy reserves. Simulation results showed path lifetime expectancies typically 50 to 120 percent higher than comparable power-aware methods.
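The modulation-scaling tradeoff can be illustrated with a toy energy model (not the model used in the thesis; the cost constants are invented for the example): per-bit transmit energy for 2^b-ary modulation grows roughly as (2^b − 1)/b at a fixed error-rate target, while the radio's fixed electronics cost shrinks as larger constellations shorten the on-time.

```python
def total_energy(b, bits, elec_per_symbol=20.0, n0=1.0):
    """Toy energy model for sending `bits` bits with 2**b-ary modulation:
    transmit energy scales as (2**b - 1)/b per bit, plus a fixed
    electronics cost per transmitted symbol. Constants are illustrative,
    not measured values."""
    symbols = bits / b
    tx = n0 * (2 ** b - 1) / b * bits
    return tx + elec_per_symbol * symbols

# Scanning b reveals an interior optimum: large constellations waste
# transmit power, tiny ones keep the radio electronics on too long.
best_b = min(range(1, 9), key=lambda b: total_energy(b, 1000))
```

With these particular constants the sweep settles on 16-ary modulation; the point is only that the optimum shifts with the energy balance, which is what modulation scaling exploits.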

Performance of a Multichannel Audio Correction System Outside the Sweetspot: Further Investigations of the Trinnov Optimizer

Wille, Joachim Olsen January 2008 (has links)
This report is a continuation of the student project "Evaluation of Trinnov Optimizer audio reproduction system". It further investigates the properties and function of the Trinnov Optimizer, a correction system for audio reproduction systems. During the student project, measurements were performed in an anechoic lab to provide information on the functionality and abilities of the Trinnov Optimizer. Massive amounts of data were recorded, and these have also been the foundation of this report. The new work consists of interpreting these results using Matlab. The Optimizer by Trinnov [9] is a standalone system for reproduction of audio over a single- or multiple-loudspeaker setup. It is designed to correct frequency and phase response, in addition to correcting loudspeaker placements and cancelling simple early reflections in a multiple-loudspeaker setup. The purpose of further investigating this issue was to understand more about the sound field produced around the listening position, and to give more detailed results on the changes in the sound field after correction. The importance of correcting the system not only in the listening position but also in the surrounding area is obvious, because there is often more than one listener. This report gives further insight through physical measurements, rather than subjective statements, on the performance of a room and loudspeaker correction device. WinMLS has been used to measure the system with single- and multiple-microphone setups. Some results from the earlier student project are also included in this report to verify the measurement methods and to show the correspondence between the different measuring systems; some of the data have therefore been compared to the Trinnov Optimizer's own measurements and appear similar in this report. Some errors found in the initial report, in the results from the phase response measurements, have also been corrected.
Multiple loudspeakers in a 5.0 setup have been measured with 5 microphones on a rotating boom to measure the sound pressure over an area around the listening position. This allowed the effect of simple reflection cancellation, and the ability to generate virtual sources, to be investigated. For the specific cases investigated in this report, the Optimizer showed the following:
- Frequency and phase response will in every situation be optimized to the extent of the Optimizer's algorithms.
- Every case shows improvement in the frequency and phase response over the whole measured area.
- Direct frontal reflections were deconvolved up to 300 Hz over the whole measured area, with a radius of 56 cm.
- A reflection from the side was deconvolved roughly up to 200 Hz for microphones 1 through 3, up to a radius of 31.25 cm, and up to 100 Hz for microphones 4 and 5.
- The ability to create virtual sources corresponds fairly well to the theoretical expectations.
The video sequences that were developed give an interesting new angle on the problems that were investigated. Rather than examining plots from different angles, which is difficult and time consuming, the videos gave an intuitive perspective that illuminated the same issues as the commonly presented frequency and phase response measurements.

Ultra-Wideband Sensor-Communication

Amat Pascual, Ángel José January 2008 (has links)
One of the fundamental concerns in wireless communications with battery-operated terminals is battery life. There are basically two ways of reducing power consumption: the algorithms should be simple and efficiently implemented (at least in the wireless terminals), and the transmit power should be limited. This document considers discrete-time, progressive signal transmission with feedback [ramstad]. For a forward Gaussian channel with an ideal feedback channel, the system performs according to OPTA (Optimal Performance Theoretically Attainable [berger]). In this case, with substantial bandwidth expansion through multiple retransmissions, the power can be lowered to a theoretical minimum. In the case of a non-ideal return channel, the results are limited by the feedback channel's signal-to-noise ratio. Going one step further, a more realistic view of the channel must consider fading due to multiple reflections, especially in indoor scenarios. This thesis discusses how to model channel fading and how to simulate it from different probability distributions. Then, solutions are proposed to avoid, or at least reduce, the undesirable effects caused by the fading. In these solutions, the fading characteristics (power and dynamic range) and the application requirements play a very important role in the final system design. Finally, transmission of a realistic signal in a realistic scenario is attempted: audio transmission over fading channels. The results are then compared in general terms to similar equipment, such as a generic wireless microphone system.
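Simulating fading from a probability distribution, as discussed above, can be as simple as drawing complex Gaussian samples. A minimal sketch, assuming Rayleigh fading (dense multipath with no line-of-sight component); the helper name is invented for the illustration:

```python
import math
import random

def rayleigh_gain(rng, sigma=1.0):
    """One Rayleigh-distributed amplitude gain: the magnitude of a
    circularly symmetric complex Gaussian, the usual model for dense
    multipath without a line-of-sight path."""
    return math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))

rng = random.Random(1)
gains = [rayleigh_gain(rng) for _ in range(20000)]
# Theory: E[gain] = sigma * sqrt(pi/2), about 1.2533 for sigma = 1.
mean_gain = sum(gains) / len(gains)
```

A Rician variant (adding a constant line-of-sight term before taking the magnitude) follows the same pattern.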

Optimisation of a Pipeline ADC by Using a Low-Power, High-Resolution Flash ADC as Backend

Høye, Dag Sverre January 2008 (has links)
Flash ADCs with resolutions from 3 to 5 bits have been implemented at the transistor level. These ADCs are to be incorporated as the backend of a higher-resolution Pipeline ADC. The motivation for this work has been to see how much the resolution of this backend can be increased before the power consumption becomes too high. Increasing the backend resolution is beneficial in Pipeline ADCs because it reduces the number of pipeline stages, which in turn reduces the throughput delay of the Pipeline ADC. All the Flash ADCs are implemented with the same capacitive interpolation technique. In a project assignment done prior to this thesis, this technique was found to have several beneficial properties compared to other power-saving techniques applied to Flash ADCs. The simulation results show that the resolution of the backend can be increased to 5 bits, both in terms of power and of other static and dynamic performance parameters.
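For context on why backend resolution is costly: a plain flash ADC needs one comparator per quantization threshold, so hardware grows exponentially with resolution. The relation below is the textbook baseline that interpolation techniques attack, not a figure from the thesis.

```python
def flash_comparators(bits):
    """A plain N-bit flash ADC needs 2**N - 1 comparators, so comparator
    count (and, to first order, power and area) roughly doubles per
    added bit. Capacitive interpolation cuts the number of full
    preamplifiers, but this exponential trend is why pushing the
    backend past 5 bits gets expensive."""
    return 2 ** bits - 1
```

Going from the 3-bit to the 5-bit backend studied here already more than quadruples the threshold count (7 versus 31).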

Measurements of Optical Penetration Depth in Smoked Salmon

Johansen, Remi Andre Ursin January 2008 (has links)
Optical spectroscopy is a common method for determining quality parameters in foodstuffs. An optical characterization of smoked Atlantic salmon was carried out in this thesis. The optical penetration depth in salmon was found at 531 nm and 632 nm from measurements with lasers as light sources, and from 550 nm to 880 nm with a tungsten halogen lamp as light source. The spectrum of the halogen lamp combined with the absorption spectrum of the salmon made it difficult to obtain results for wavelengths below 550 nm with the halogen lamp. Two variations of the measurements on smoked salmon were performed: measuring during needle insertion versus needle extraction, and measuring across several layers of muscle tissue versus along one layer. The absorption coefficient and the reduced scattering coefficient of smoked Atlantic salmon were calculated. Significant differences were found depending on needle insertion or extraction, and on whether the measurements were made along one layer or across several layers. The penetration depths were found to be 6.79±0.33 mm across several layers and 10.76±1.03 mm along one layer in the measurements with the He-Ne laser. The diffusion approximation was found to be a good approximation for wavelengths from 600 nm to 700 nm. With further development, it may be possible to determine the astaxanthin content of salmon with the method used in this thesis.
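Penetration depths like those quoted above follow from fitting an exponential decay I(d) = I0·exp(−d/δ) to intensity-versus-depth data. A sketch of that fit on noiseless synthetic data; the helper is hypothetical and the thesis's actual fitting procedure may differ:

```python
import math

def penetration_depth(depths_mm, intensities):
    """Least-squares line fit of ln(I) versus depth d for
    I(d) = I0 * exp(-d / delta); the penetration depth delta
    is -1 / slope."""
    n = len(depths_mm)
    ys = [math.log(i) for i in intensities]
    xbar = sum(depths_mm) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(depths_mm, ys))
             / sum((x - xbar) ** 2 for x in depths_mm))
    return -1.0 / slope

# Synthetic check: data generated with delta = 6.79 mm (the across-layers
# He-Ne value above) is recovered by the fit.
depths = [0.0, 2.0, 4.0, 6.0, 8.0]
delta = penetration_depth(depths, [math.exp(-d / 6.79) for d in depths])
```

On real measurements the log-linear fit would be restricted to the depth range where the diffusion approximation holds, as the abstract notes for 600–700 nm.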

Reduction of speckle contrast in HDTV laser projection display.

Apeland, Knut Øyvind January 2008 (has links)
In this thesis the focus has been on laser speckle. The work is done in collaboration with poLight, who are developing a projector in which laser light is the source of illumination. In such projectors, laser speckle degrades the image quality. The aim of this project is to construct a speckle reduction device to be used in the laser projector. The theory covers a description of laser speckle, how to reduce the speckle contrast, and five methods to do so. We explain why speckle arises and which parameters we can manipulate to reduce the speckle contrast. The five speckle reduction methods included in this thesis are: vibrating diffuser, slowly moving diffuser, Hadamard matrices, scattering tube, and vibrating mirror. Large vibrational motions are unwanted, considering the size of the device, the generation of noise, and the problems with alignment of the optical components in the projector that they would lead to. The quality of the laser beam is essential in order to produce a sharp image, so the use of diffusers with large scattering angles is not a good solution. The scattering tubes, designed by poLight, are tubes filled with micro pearls in a polymer gel. The size of the pearls determines the nature of the scattering: larger pearls give less back-scattering and more light transmitted in the forward direction. If the tubes are rotated in a well balanced device, we can avoid generating vibrations. The Hadamard matrix method is the only one of the five that is not based on motion. The challenge is to find an SLM on which to implement the matrices; it requires a low response time in order to present enough matrices during the exposure time of the eye. The laboratory setup we use to measure the speckle contrast is an improved version of the setup constructed in the specialisation project. A screen was removed from the old setup, and the speckle is now imaged directly from the speckle reduction device.
The measured speckle reduction is thus due to the device alone, and not affected by the screen. The results were reproducible and in agreement with what we expected. We implemented a vibrating diffuser, both the single and the slowly moving variant. A piece cut from a plastic bag and some Scotch Magic tape were used as diffusers. The tape is the strongest diffuser and gives the lowest speckle contrast; however, it also has the largest scattering angle. The single tape diffuser reduced the speckle contrast to $C = 0.112$. With two tape diffusers in series, the intensity in the images becomes too low to exploit the dynamic range of the CCD sensor. The result is a higher calculated speckle contrast with two diffusers, $C = 0.131$, even though it ought to be smaller. We tested five prototypes of the scattering tube with different concentrations. The tube with the highest concentration has the best speckle reduction abilities. It also has the strongest scattering effect. The scattering is less than with the tape diffuser, and so is the speckle reduction. The speckle contrast is reduced to $C = 0.320$ when the tube is rotated, and to $C = 0.389$ when it is vibrated. The tubes were also tested in series with a ground glass, which acted as a second diffuser. In this setting, vibration and rotation of the tubes reduced the speckle contrast equally, to $C \approx 0.283$. From the measured speckle contrast of the diffusers and tubes in stationary conditions, a polarization analysis should show a depolarization of the laser beam. This was the case only for the plastic diffuser; it is assumed that the error lies with the polarization analysis, since there should be a depolarization in the tape and a partial depolarization in the tubes. A calculation of the speckle size was performed as well. Based on the theory, we expected the size of the speckle grains to be $\sigma_s = 37.77~\mu m$. From the Fourier analysis of a speckle image from the setup we calculated the speckle size to be $\sigma_s = 5.35$~mm, which is approximately 140 times bigger. The expected speckle size is too small because we did not take into account a small magnification in the setup. The Fourier analysis of discrete and limited sets of data points is probably the main explanation of the difference, but a more thorough study is needed.
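The contrast values quoted above are computed as C = σ_I/⟨I⟩ over an intensity image: fully developed polarized speckle has C = 1, and averaging N independent speckle patterns during the exposure lowers it to 1/√N. A minimal sketch of the statistic (an illustrative helper, not poLight's measurement code):

```python
import random
from statistics import fmean, pstdev

def speckle_contrast(intensity):
    """Speckle contrast C = sigma_I / <I> over a flattened intensity
    image. C = 1 for fully developed polarized speckle; averaging N
    independent patterns reduces it to 1 / sqrt(N)."""
    return pstdev(intensity) / fmean(intensity)

rng = random.Random(0)
# Fully developed speckle has exponentially distributed intensity -> C near 1.
single = [rng.expovariate(1.0) for _ in range(50000)]
# Averaging 4 independent patterns during one exposure -> C near 1/2.
averaged = [fmean(rng.expovariate(1.0) for _ in range(4))
            for _ in range(50000)]
```

This is the statistic behind all the reported numbers; the various devices differ only in how many effectively independent patterns they present within the eye's (or the CCD's) integration time.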

Sensor Array Signal Processing for Source Localization

Manzano García-Muñoz, Cristina January 2008 (has links)
This work is a study of source localization methods, more precisely of beamforming approaches. The necessary background theory is provided first and then developed further to explain the basis of each approach. The studied problem consists of an array of sensors on which the signal to be processed impinges. Several examples of incident signals are provided in order to compare the performance of the methods. The goal of the approaches is to find the Incident Signal Power and the Direction Of Arrival of the Signal (or Signals) Of Interest. With this information, the source can be located in angle and range. After the study, the conclusions show which methods to choose depending on the application pursued. Finally, some ideas and guidelines for future investigation in the field are given.
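The core of the simplest beamforming approach, conventional delay-and-sum, fits in a few lines: steer the array to each candidate angle and measure the output power. A minimal narrowband sketch for a uniform linear array (names and parameters are invented for the illustration, and only one of the family of methods the study compares):

```python
import cmath
import math

def steering_vector(n_sensors, spacing_wl, theta_deg):
    """Narrowband ULA steering vector; spacing_wl is the element
    spacing in wavelengths (lambda/2 avoids grating lobes)."""
    th = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * spacing_wl * k * math.sin(th))
            for k in range(n_sensors)]

def das_spectrum(snapshots, spacing_wl, angles_deg):
    """Delay-and-sum spatial power spectrum: for each look angle,
    average the beamformer output power |a(theta)^H x / n|^2 over
    the array snapshots x."""
    n = len(snapshots[0])
    spectrum = []
    for ang in angles_deg:
        a = steering_vector(n, spacing_wl, ang)
        p = sum(abs(sum(ai.conjugate() * xi
                        for ai, xi in zip(a, x)) / n) ** 2
                for x in snapshots)
        spectrum.append(p / len(snapshots))
    return spectrum

# A noiseless source at +20 degrees yields a spectrum peak at +20.
angles = list(range(-90, 91))
snaps = [steering_vector(8, 0.5, 20.0)]
spec = das_spectrum(snaps, 0.5, angles)
doa = angles[spec.index(max(spec))]
```

Higher-resolution approaches (e.g. Capon's beamformer) replace the plain snapshot average with the inverse covariance matrix but keep the same scan-and-peak structure.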

Objective Image Quality Metrics for Ultrasound Imaging

Simpson, Cecilie Øinæs January 2009 (has links)
Objective evaluation of the image quality of ultrasound images is a comprehensive task due to the relatively low image quality compared to other imaging techniques. It is desirable to determine the quality of ultrasound images objectively, since quantification of the quality removes the subjective evaluation, which can lead to varying results. The scanner will also be more user-friendly if the user is given feedback on the quality of the current image. This thesis has investigated the objective evaluation of image quality in phantom images. Emphasis has been placed on the spatial variance parameter, which is incorporated in the image analysis system developed during the project assignment. The spatial variance was tested for a variety of settings, for instance different beam densities and numbers of MLAs. In addition, different power spectra have been evaluated in relation to the ProbeContact algorithm developed by the Department of Circulation and Medical Imaging (ISB). This algorithm has also been incorporated in the image analysis system. The results show that the developed algorithm gives a good indication of the spatial variance. An image becomes more and more spatially variant as the beam density decreases. If the beam density goes below the Nyquist sampling limit, a point target will appear to move more slowly when passing a beam, since the region between two beams is undersampled. This effect is seen in the correlation coefficient plots, which are used as a measure of spatial variance. The results from the calculations related to the ProbeContact algorithm show that rearranging the order of the averaging and the Fourier transformation has an impact on the calculated probe contact, but the differences are tolerable.
All the evaluated methods can be used, but performing the Fourier transform before averaging can be viewed as the best solution, since it gives a lateral power spectrum with low variance and a smooth mean frequency and bandwidth when compared over several frames. This is suggested with the reservation that basic settings are used. Performing a 1D (lateral) or 2D Fourier transform before averaging has no impact on the resulting power spectrum as long as a normalized Fourier transform is used. The conclusion is that the image analysis system, including the spatial variance parameter, is a good tool for evaluating various parameters related to image quality. The system is improved by the ProbeContact algorithm, which gives a good indication of the image quality based on the acoustic contact of the probe. Even though the image analysis system is limited to phantom images, this thesis is a starting point in the process of obtaining objective evaluation of the image quality in clinical images, since others may use it as a basis for their work.
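The correlation-coefficient measure of spatial variance mentioned above compares a point-target response at different lateral positions: identical responses give a coefficient of 1, and values falling below 1 indicate spatial variance. A sketch of the statistic (an illustrative helper, not the thesis's image analysis system):

```python
import math
from statistics import fmean

def correlation_coefficient(resp_a, resp_b):
    """Normalized (Pearson) correlation between two sampled point-target
    responses; 1.0 means the responses are identical up to gain and
    offset, i.e. the system is spatially invariant between the two
    positions."""
    ma, mb = fmean(resp_a), fmean(resp_b)
    num = sum((x - ma) * (y - mb) for x, y in zip(resp_a, resp_b))
    den = math.sqrt(sum((x - ma) ** 2 for x in resp_a)
                    * sum((y - mb) ** 2 for y in resp_b))
    return num / den
```

Plotting this coefficient as the target moves across the beams is what reveals undersampling: below the Nyquist beam density, the response shape changes between beam centres and the curve dips well below 1.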

Real-Time JPEG2000 Video Decoding on General-Purpose Computer Hardware

Halsteinli, Erlend January 2009 (has links)
There is widespread use of compression in multimedia content delivery, e.g. within video-on-demand services and transport links between live events and production sites. The content must undergo compression prior to transmission in order to deliver high-quality video and audio over most networks; this is especially true for high-definition video content. JPEG2000 is a recent image compression standard and a suitable compression algorithm for high-definition, high-rate video. With its highly flexible embedded lossless and lossy compression scheme, JPEG2000 has a number of advantages over existing video codecs. The only evident drawbacks with respect to real-time applications are that the computational complexity is quite high and that JPEG2000, being an image compression codec as opposed to a video codec, typically has higher bandwidth requirements. Special-purpose hardware can deliver high performance, but is expensive and not easily updated. A JPEG2000 decoder application running on general-purpose computer hardware can complement solutions depending on special-purpose hardware, and its performance will scale with the available processing power. In addition, once developed, it has no production costs. The application implemented in this project is a streaming media player. It receives a compressed video stream through an IP interface, decodes it frame by frame and presents the decoded frames in a window. The decoder is designed to take better advantage of the processing power available in today's desktop computers. Specifically, decoding is performed on both the CPU and the GPU in order to decode at least 50 frames per second of a 720p JPEG2000 video stream. The CPU-executed part of the decoder application is written in C++, based on the Kakadu SDK, and involves all decoding steps up to and including the reverse wavelet transform.
The GPU-executed part of the decoder is enabled by the CUDA programming language, and includes luma upsampling and the irreversible color transform. Results indicate that general-purpose computer hardware today can easily decode JPEG2000 video at bit rates up to 45 Mbit/s. However, when the video stream is received at 50 fps through the IP interface, packet loss at the socket level limits the attained frame rate to about 45 fps at rates of 40 Mbit/s or lower. If this packet loss could be eliminated, real-time decoding would be obtained up to 40 Mbit/s. At rates above 40 Mbit/s, the attained frame rate is limited by the decoder performance and not the packet loss. Higher codestream rates should be attainable if the reverse wavelet transform could be moved from the CPU to the GPU, since the current pipeline is highly unbalanced.
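The quoted rates imply a tight per-frame budget. This quick arithmetic check, using only the numbers in the abstract (the helper names are invented), shows what 50 fps at 45 Mbit/s means per 720p frame:

```python
def per_frame_bits(rate_mbit_s, fps):
    """Compressed bits available per frame at a given stream rate."""
    return rate_mbit_s * 1e6 / fps

def bits_per_pixel(rate_mbit_s, fps, width, height):
    """Average compressed bits per pixel for the same stream."""
    return per_frame_bits(rate_mbit_s, fps) / (width * height)

# 45 Mbit/s at 50 fps leaves 0.9 Mbit (112.5 kB) per frame, just under
# 1 bit per pixel for 1280x720 -- and each frame must clear the whole
# CPU+GPU pipeline within its 20 ms slot to sustain real time.
budget = per_frame_bits(45, 50)
bpp = bits_per_pixel(45, 50, 1280, 720)
```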
