121 |
Application of UWB Technology for Positioning, a Feasibility Study
Canovic, Senad January 2007 (has links)
<p>Ultra wideband (UWB) signaling and its usability in positioning schemes are discussed in this report. A description of UWB technology is provided, covering both the advantages and disadvantages involved. The main focus is on Impulse Radio UWB (IR-UWB), since this is the most common way of emitting UWB signals. IR-UWB operates at a very large bandwidth and a low power, based on a technique that consists of emitting very short pulses (on the order of nanoseconds) at a very high rate. The result is low power consumption at the transmitter but increased complexity at the receiver. The transmitter is based on the so-called Time Hopping UWB (TH-UWB) scheme, while the receiver is a RAKE receiver with five branches. IR-UWB also provides good multipath properties, secure transmission, and accurate positioning, with the latter being the main focus of this report. Four positioning methods are presented with a view to finding which is most suitable for UWB signaling. Received Signal Strength (RSS), Angle Of Arrival (AOA), Time Of Arrival (TOA) and Time Difference Of Arrival (TDOA) are all considered, and TDOA is found to be the most appropriate. Increasing the SNR or the effective bandwidth increases the accuracy of the time-based positioning schemes. TDOA thus exploits the large bandwidth of UWB signals to achieve more accurate positioning, in addition to synchronization advantages over TOA. The TDOA positioning scheme is tested under realistic conditions and the results are provided. A sensor network is simulated based on indications provided by WesternGeco. Each sensor consists of a transmitter and receiver which generate and receive signals transmitted over a channel modeled after the IEEE 802.15.SG3 channel model. It is shown that the transmitter power and sampling frequency, the distance between the nodes and the position of the target node all influence the accuracy of the positioning scheme.
For a common sampling frequency of 55 GHz, power levels of -10 dBm, -7.5 dBm and -5 dBm are needed in order to achieve satisfactory positioning at distances of 8, 12, and 15 meters respectively. The need for choosing appropriate reference nodes for the cases when the target node is selected on the edges of the network is also pointed out.</p>
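The TDOA principle evaluated above can be sketched numerically: each time difference between a reference node and a common anchor constrains the target to a hyperbola, and the intersection is solved by least squares. A minimal Gauss-Newton sketch with a hypothetical four-anchor layout (the geometry and noise-free measurements are illustrative, not the WesternGeco network of the thesis):

```python
import numpy as np

# Hypothetical anchor layout and true target position (meters).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
c = 3e8  # propagation speed (m/s)

# Ideal TDOAs relative to anchor 0 (noise-free range differences / c).
ranges = np.linalg.norm(anchors - target, axis=1)
tdoa = (ranges[1:] - ranges[0]) / c

def locate(anchors, tdoa, c, x0=np.array([5.0, 5.0]), iters=20):
    """Gauss-Newton solution of the hyperbolic TDOA equations."""
    x = x0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        r = (d[1:] - d[0]) - c * np.asarray(tdoa)   # residuals
        u = (x - anchors) / d[:, None]              # gradients of each range
        J = u[1:] - u[0]                            # Jacobian of range differences
        x -= np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton step
    return x

est = locate(anchors, tdoa, c)
```

With noise-free measurements and this well-conditioned geometry the estimate converges to the true position; adding measurement noise reproduces the accuracy-versus-SNR behaviour the abstract describes.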
|
122 |
Vectorized 128-bit Input FP16/FP32/FP64 Floating-Point Multiplier
Stenersen, Espen January 2008 (has links)
<p>3D graphic accelerators are often limited by their floating-point performance. A Graphics Processing Unit (GPU) has several specialized floating-point units to achieve high throughput and performance. The floating-point units consume a large part of the total area and power, and architectural choices are therefore important to evaluate when implementing the design. GPUs are specially tuned for performing a set of operations on large sets of data. The task of a 3D graphics solution is to render an image or a scene. The scene contains geometric primitives as well as descriptions of the light, the way each object reflects light, and the viewer position and orientation. This thesis evaluates four different pipelined, vectorized floating-point multipliers supporting 16-bit, 32-bit and 64-bit floating-point numbers. The architectures are compared with respect to area usage, power consumption and performance. Two of the architectures are implemented at Register Transfer Level (RTL), tested and synthesized, to see if assumptions made in the estimation methodologies are accurate enough to select the best architecture to implement given a set of architectures and constraints. The first architecture trades area for lower power consumption, with a throughput of 38.4 Gbit/s at 300 MHz clock frequency, and the second architecture trades power for smaller area with equal throughput. The two architectures are synthesized at 200 MHz, 300 MHz and 400 MHz clock frequency, in a 65 nm low-power standard cell library and a 90 nm general purpose library, and for different input data format distributions, to compare area and power results at different clock frequencies, input data distributions and target technologies. Architecture one has lower power consumption than architecture two at all clock frequencies and input data format distributions. At 300 MHz, architecture one has a total power consumption of 1.9210 mW at 65 nm, and 15.4090 mW at 90 nm.
Architecture two has a total power consumption of 7.3569 mW at 65 nm, and 17.4640 mW at 90 nm. Architecture two requires less area than architecture one at all clock frequencies. At 300 MHz, architecture one has a total area of 59816.4414 um^2 at 65 nm, and 116362.0625 um^2 at 90 nm. Architecture two has a total area of 50843.0 um^2 at 65 nm, and 95242.0469 um^2 at 90 nm.</p>
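The datapath such a multiplier vectorizes is the standard sign/exponent/significand decomposition: XOR the signs, add the exponents (removing one bias), multiply the significands and normalize. A behavioural sketch of the FP16 lane only, for normal numbers with truncating rounding (the actual RTL additionally handles round-to-nearest-even, subnormals, specials, and the FP32/FP64 formats):

```python
def fp16_fields(bits):
    """Split an IEEE 754 half-precision bit pattern into (sign, exp, frac)."""
    return (bits >> 15) & 1, (bits >> 10) & 0x1F, bits & 0x3FF

def fp16_mul(a_bits, b_bits):
    """Multiply two normal FP16 numbers given as raw 16-bit patterns.
    Sketch only: no subnormal, NaN/Inf or overflow handling, and
    truncation instead of round-to-nearest-even."""
    sa, ea, fa = fp16_fields(a_bits)
    sb, eb, fb = fp16_fields(b_bits)
    sign = sa ^ sb
    # Restore the hidden leading 1 of normal numbers (11-bit significands).
    ma, mb = 0x400 | fa, 0x400 | fb
    prod = ma * mb                  # up to 22-bit significand product
    exp = ea + eb - 15              # add biased exponents, remove one bias
    if prod & (1 << 21):            # product in [2, 4): normalize right
        prod >>= 1
        exp += 1
    frac = (prod >> 10) & 0x3FF     # drop hidden bit, truncate low bits
    return (sign << 15) | (exp << 10) | frac
```

For example, 2.0 (0x4000) times 3.0 (0x4200) yields 6.0 (0x4600). The 128-bit vectorized unit in the thesis effectively packs eight such FP16 lanes (or four FP32, or two FP64) over shared multiplier hardware.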
|
123 |
Delay-Fault BIST in Low-Power CMOS Devices
Leistad, Tor Erik January 2008 (has links)
<p>Devices such as microcontrollers are often required to operate across a wide range of voltage and temperature. Delay variation across temperature and voltage corners can be large, and for deep submicron geometries delay faults are more likely than for larger geometries. This has made delay-fault testing necessary. Scan testing is widely used as a test method, but it is slow due to the time spent shifting test vectors and responses, and it also needs modification to support delay testing. This assignment is divided into three parts. The first part investigates some of the effects in deep submicron technologies, then looks at different fault models, and finally investigates different techniques for delay testing and BIST approaches. The second part suggests a design for a test chip, including a circuit under test (CUT) and BIST logic. The final part investigates how the selected BIST logic can be used to reduce test time and what considerations need to be made to arrive at an optimal solution. The suggested design is a co-processor with an SPI slave interface. Since scan-based testing is commonly used today, STUMPS was selected as the BIST solution. Assuming that scan is already used, STUMPS will have little impact on the performance of the CUT since it is based on scan testing. During analysis it was found that several aspects of the CUT design affect the maximum obtainable delay-fault coverage. It was also found that careful design of the BIST logic is necessary to get the best fault coverage and a solution that reduces the overall cost. The results show that a large amount of time can be saved during test by using BIST, but since the area of the circuit increases due to the BIST logic, this does not necessarily mean that the overall cost is reduced.
Whether or not a BIST solution will result in reduced cost depends on the complexity of the circuit that is tested, how well the BIST logic fits this circuit, how many internal scan chains can be used, and how fast scan vectors can be applied under BIST. In this case the BIST logic does not appear well suited to detect the random, hard-to-detect faults. This results in a large number of top-up patterns, which, combined with the large area of the BIST logic, makes it unlikely that BIST will reduce the cost of this design.</p>
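The pattern source in a STUMPS-style BIST is typically a maximal-length LFSR feeding the parallel scan chains. A toy sketch of the classic 16-bit Fibonacci LFSR used as a scan-vector generator (illustrative only; real STUMPS adds a phase shifter between the LFSR and the chains, and a MISR to compact responses):

```python
def lfsr16_step(state):
    """One step of a maximal-length 16-bit Fibonacci LFSR
    (feedback polynomial x^16 + x^14 + x^13 + x^11 + 1)."""
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def scan_vectors(seed, chain_len, n_vectors):
    """Generate pseudo-random scan vectors, one output bit per LFSR step."""
    state = seed
    vecs = []
    for _ in range(n_vectors):
        bits = []
        for _ in range(chain_len):
            bits.append(state & 1)
            state = lfsr16_step(state)
        vecs.append(bits)
    return vecs
```

Because the polynomial is primitive, the state sequence only repeats after 2^16 - 1 steps, which is what makes such a generator usable as a cheap on-chip source of the random patterns discussed above; the random-resistant faults then need deterministic top-up patterns, exactly as the thesis observes.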
|
124 |
Power Allocation In Cognitive Radio
Canto Nieto, Ramon, Colmenar Ortega, Diego January 2008 (has links)
<p>One of the major challenges in the design of wireless networks is the use of the frequency spectrum. Numerous studies on spectrum utilization show that 70% of the allocated spectrum is in fact not utilized. This has led researchers to seek better ways of using the spectrum, giving rise to the concept of Cognitive Radio (CR). One of the main goals when designing a CR system is to find the best way of deciding when a user should be active and when not. In this thesis, the performance of the Binary Power Allocation protocol is analyzed in depth under different conditions for a defined network. The main metric used is the probability of outage, studying the behavior of the system for a wide range of values of different transmission parameters such as rate, outage probability constraints, protection radius, power ratio and maximum transmission power. All the studies are performed on a network with only one Primary User per cell, communicating with a Base Station. This user shares the cell with N potential secondary users, randomly distributed in space, communicating with their respective secondary receivers, of which only M are allowed to transmit according to the Binary Power Control protocol. In order to analyze the system broadly and guide the reader to a better comprehension of its behavior, different considerations are taken. Firstly, an ideal model with no error in the channel information acquisition and random switching-off of users is presented. Secondly, we try to improve the behavior of the system by developing different methods for deciding when to drop a user that is harming the primary user's communication. Besides this, more realistic models of the channel state information are considered, including Log-normal and Gaussian error distributions.
Methods and modifications used to reach the obtained analytical results are presented in detail, and these results are followed by simulation results. Some results that do not accord with theoretical expectations are also presented and discussed, in order to open further avenues of development and research.</p>
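The outage metric used throughout can be illustrated on a single Rayleigh-fading link, where a closed form exists to validate a Monte Carlo estimate: outage at rate R occurs when the instantaneous SNR falls below 2^R - 1. A minimal sketch (parameters are illustrative, not the thesis's multi-user network model):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_snr = 10.0            # average linear SNR of the link (hypothetical)
rate = 1.0                 # target rate in bits/s/Hz
thr = 2**rate - 1          # SNR threshold below which the link is in outage

# Rayleigh fading: the channel power gain is exponential with unit mean.
gain = rng.exponential(1.0, size=200_000)
p_out_mc = np.mean(mean_snr * gain < thr)

# Closed form for the same model: P_out = 1 - exp(-thr / mean_snr).
p_out_exact = 1 - np.exp(-thr / mean_snr)
```

In the thesis's setting, the same outage computation is repeated under the secondary users' aggregate interference, and the binary on/off decision is what keeps the primary user's outage below its constraint.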
|
125 |
MPEG Transcoder for Xilinx Spartan
Krohn, Jørgen, Linnerud, Jørgen January 2008 (has links)
<p>In this project the focus has been on developing an MPEG transcoder that can be used as a demonstration module for the AHEAD system, Ambient Hardware: Embedded Architecture on Demand. AHEAD is a collaboration project between NTNU and SINTEF in Trondheim that aims to develop a method for run-time reconfiguration of hardware. The AHEAD system will in the future use an FPGA in a tag that is able to reconfigure itself with a hardware description that it receives from a hand-held device, e.g. a PDA, or downloads from the Internet. The tag will then be able to operate as a co-processor for hand-held devices in its vicinity. Consequently, since the hand-held devices avoid doing some of the heavy processing of the video stream, the power consumption in the hand-held device is decreased. The MPEG transcoder in this report consists of two parts, an MPEG-4 decoder and an MPEG-2 encoder, that are connected to form a complete transcoder. The MPEG-4 decoder was designed in software in the pre-project to this Master thesis and has in this Master thesis been designed in hardware. The MPEG-2 encoder was partially designed by the former students Rognerud and Rustad, but was not working as required and had to be modified to a large extent. In this project the MPEG-4 decoder has been designed from scratch, and the MPEG-2 encoder has been modified so that it operates as specified in the MPEG-2 standard. The first part to be designed was the MPEG-4 decoder, due to the experience with that part from the pre-project and because it is the first part of the transcoder. It was also useful for producing input data for the encoder. Secondly, the MPEG-2 encoder was modified to operate as required. However, the amount of time spent finding and resolving the errors in this part was larger than assumed at the beginning of the project.
A way was found to downscale the resolution of a video in the frequency domain; thus, the Inverse Discrete Cosine Transform (IDCT) and Discrete Cosine Transform (DCT) modules were not needed in the design of the MPEG transcoder. However, the resolution scaler has not been designed in this project, but should be a part of the MPEG transcoder in the future. This should be done to further decrease the power consumption in the hand-held device. In other words, the resolution scaler would be a very important module of the MPEG transcoder and should be implemented in the future MPEG transcoder to make it more beneficial for use in the AHEAD system. During testing and verification, both the MPEG-4 decoder and the MPEG-2 encoder were found to function as specified by the MPEG standards. A video was decoded from MPEG-4, transcoded to MPEG-2 and recognized as an MPEG-2 video that could be displayed in several media players with good video quality. The synthesis results show that the complete MPEG transcoder would use 84% of the available resources on the FPGA available for experimental purposes in this project. They also show that the designed MPEG transcoder could operate at a clock frequency of 54 MHz. This results in an MPEG transcoder that is capable of transcoding videos of at least full DVD quality, 720 x 576 pixels, at run-time, which is thought to be sufficient for most cases in AHEAD. Additionally, the transcoder would in most cases be able to transcode HD video of 1280 x 720 resolution, although this depends on the degree of compression and the nature of the incoming MPEG-4 video. This Master thesis concludes that an MPEG transcoder that transcodes MPEG-4 video to MPEG-2 video has been designed, tested and verified. The MPEG transcoder is capable of handling at least DVD-quality video, which should be sufficient for most cases in AHEAD.
This project has not focused on incorporating the transcoded video in a transport stream at run-time. However, it is recommended to do so in a future transcoder system, and the interface of the MPEG transcoder in this project has been described to make this easier. Also, an article explaining a method for doing resolution scaling in the frequency domain has been proposed. It is further concluded that the MPEG transcoder designed in this project is a large step toward an MPEG transcoding system that can operate in the future AHEAD system. Additionally, it was experienced that reusing other designers' modules can sometimes be less convenient, since the increased amount of time spent on debugging can exceed the extra time spent on designing from scratch. This is because self-designed modules tend to be easier to debug.</p>
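The frequency-domain downscaling recommended for the future resolution scaler can be sketched as follows: take the 2-D DCT of each 8x8 block, keep only the 4x4 low-frequency corner, and inverse-transform at the smaller size, halving the resolution without a full pixel-domain IDCT/DCT round trip. A minimal sketch of the idea only (the renormalization shown is the simple orthonormal-DCT factor; a real transcoder must also rescale motion vectors and re-do rate control):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def downscale_block(block):
    """Halve the resolution of one 8x8 pixel block in the frequency
    domain: forward 8-point 2-D DCT, keep the 4x4 low-frequency
    corner, inverse 4-point 2-D DCT."""
    C8, C4 = dct_matrix(8), dct_matrix(4)
    coeffs = C8 @ block @ C8.T      # forward 2-D DCT
    low = coeffs[:4, :4] / 2.0      # keep low frequencies, renormalize
    return C4.T @ low @ C4          # inverse 2-D DCT at half size
```

A flat 8x8 block comes back as the same flat value at 4x4, confirming that the DC level (and hence average brightness) is preserved by the factor-of-two renormalization.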
|
126 |
Low power/high performance dynamic reconfigurable filter-design
Bystrøm, Vebjørn January 2008 (has links)
<p>The main idea behind this thesis was to optimize the multipliers in a finite impulse response (FIR) filter. The project was chosen because digital filters are very common in digital signal processing and are an exciting area to work with. The first part of the text describes some theory behind digital filters and how to optimize the multipliers that are part of them. The main technique to emphasize here is Canonical Signed Digit (CSD) encoding. CSD representation for FIR filters can reduce the delay and complexity of the hardware implementation. CSD encoding reduces the number of non-zero digits and thereby reduces the multiplication process to a few additions/subtractions and shifts. In this thesis, four versions of the same filter were designed and implemented on an FPGA, where the most interesting results were the differences between coefficients that were CSD-encoded and coefficients represented in 2's complement. It was shown that the filter version with CSD-encoded coefficients used almost 20% less area than the filter version with 2's complement coefficients. The CSD-encoded filter could run at a maximum frequency of 504.032 MHz, compared to the other filter, which could run at a maximum frequency of 249.123 MHz. One of the filters was designed using the * operator in VHDL, and proved to be the most efficient in terms of slice count and speed. The reason is that an FPGA has built-in multipliers, so if one has the opportunity to use them they will give a better result than using logic blocks on the FPGA. A filter that can change its coefficients at run-time without restarting the design from the beginning was also discussed. This is an advantage because a constant-coefficient multiplier requires the FPGA to be reconfigured and the whole design cycle to be re-implemented.
The drawback of the dynamic multiplier is that it uses more hardware resources.</p>
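The CSD recoding behind the ~20% area saving can be sketched directly: it rewrites a coefficient over the digits {-1, 0, +1} with no two adjacent non-zeros, so each constant multiply collapses to a short chain of shifts and adds/subtracts. A minimal sketch (LSB-first digit list; the VHDL version would instead emit the corresponding add/subtract network):

```python
def to_csd(x):
    """Canonical signed-digit (non-adjacent form) recoding of an
    integer coefficient. Returns digits in {-1, 0, +1}, LSB first,
    with no two adjacent non-zero digits."""
    digits = []
    while x != 0:
        if x & 1:
            # Pick +1 or -1 so the remainder is divisible by 4,
            # forcing the next digit to be zero.
            d = 2 - (x & 3)       # +1 if x % 4 == 1, else -1
            digits.append(d)
            x -= d
        else:
            digits.append(0)
        x //= 2
    return digits
```

For example, 7 = 0b111 needs three adders in plain binary but recodes to 8 - 1 (digits [-1, 0, 0, 1]), i.e. one shift and one subtraction, which is exactly the hardware saving the thesis measures.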
|
127 |
A Pragmatic Approach to Modulation Scaling Based Power Saving for Maximum Communication Path Lifetime in Wireless Sensor Networks
Malavia Marín, Raúl January 2008 (has links)
<p>Interest in Wireless Sensor Networks is rapidly increasing due to their advantages in cost, coverage and network deployment. They are present in civil applications and in most scenarios depend upon batteries, which are the exclusive power source for the tiny sensor nodes. Energy consumption is therefore an important research issue, and many interesting projects have been developed in several areas, focusing on topology, Medium Access Control or physical-layer issues. Many projects target the physical layer, where the node's power consumption is optimized by scaling the modulation scheme used in node communications. Results show that an optimal modulation scheme can lead to minimum power consumption over the whole wireless sensor network. A usual simplification in research is to target individual paths and not take the whole network into account. However, nodes may be part of several paths, and therefore nodes closer to the sinks may consume larger amounts of energy. This fact is the chief motivation of our research, where modulation scaling is performed on the nodes with more energy in order to increase the lifetime of the nodes with lower energy reserves. Simulation results showed path lifetime expectancies typically 50 to 120 percent higher than comparable power-aware methods.</p>
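The modulation-scaling tradeoff exploited above can be sketched with the standard M-QAM energy model: at a fixed error probability, transmit energy per bit grows roughly as (2^b - 1)/b for b bits per symbol, while a larger b shortens the on-air time and hence the electronics energy. A toy optimizer over constellation sizes (all constants are hypothetical, not the paper's radio model):

```python
def tx_energy_per_bit(b, k=1.0):
    """Relative radiated energy per bit for 2^b-QAM at a fixed error
    probability: grows as (2**b - 1) / b (k is an illustrative constant)."""
    return k * (2**b - 1) / b

def total_energy(bits, deadline, b, p_elec=0.5, k=1.0):
    """Energy to deliver `bits` within `deadline` at b bits/symbol:
    radiated energy plus electronics power times on-air time.
    Hypothetical model in the spirit of modulation-scaling work."""
    t_on = bits / b                # unit symbol time: b bits per symbol
    if t_on > deadline:
        return float('inf')        # constellation too small for the deadline
    return bits * tx_energy_per_bit(b, k) + p_elec * t_on

# Sweep constellation sizes to find the energy-minimal modulation level.
best = min(range(2, 9), key=lambda b: total_energy(1e4, 4e3, b))
```

The sweep shows the characteristic interior optimum: the smallest constellation that still meets the latency deadline usually minimizes energy, which is why slack on energy-rich nodes can be spent to relieve energy-poor ones.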
|
128 |
Performance of a Multichannel Audio Correction System Outside the Sweetspot: Further Investigations of the Trinnov Optimizer
Wille, Joachim Olsen January 2008 (has links)
<p>This report is a continuation of the student project "Evaluation of Trinnov Optimizer audio reproduction system". It further investigates the properties and function of the Trinnov Optimizer, a correction system for audio reproduction systems. During the student project, measurements were performed in an anechoic lab to provide information on the functionality and abilities of the Trinnov Optimizer. Massive amounts of data were recorded, and these have also been the foundation of this report. The new work consists of interpreting these results with Matlab. The Optimizer by Trinnov [9] is a standalone system for reproduction of audio over a single or multiple loudspeaker setup. It is designed to correct frequency and phase response, in addition to correcting loudspeaker placements and cancelling simple early reflections in a multiple loudspeaker setup. The purpose of further investigating this issue was to understand more about the sound field produced around the listening position, and to give more detailed results on the changes in the sound field after correction. The importance of correcting the system not only in the listening position but also in the surrounding area is obvious, because there is often more than one listener. This report gives further insight through physical measurements, rather than subjective statements, on the performance of a room and loudspeaker correction device. WinMLS has been used to measure the system with single and multiple microphone setups. Some results from the earlier student project are also included in this report to verify measurement methods and to show correspondence between the different measuring systems. Therefore some of the data have been compared to the Trinnov Optimizer's own measurements and appear similar in this report. Some errors found in the initial report, in the results from the phase response measurements, have also been corrected.
Multiple loudspeakers in a 5.0 setup have been measured with 5 microphones on a rotating boom to measure the sound pressure over an area around the listening position. This allowed the effect of simple reflection cancellation, and the ability to generate virtual sources, to be investigated. For the specific cases that were investigated in this report, the Optimizer showed the following:
- Frequency and phase response will in every situation be optimized to the extent of the Optimizer's algorithms.
- Every case shows improvement in the frequency and phase response over the whole measured area.
- Direct frontal reflections were deconvolved up to 300 Hz over the whole measured area, with a radius of 56 cm.
- A reflection from the side was deconvolved roughly up to 200 Hz for microphones 1 through 3, up to a radius of 31.25 cm, and up to 100 Hz for microphones 4 and 5.
- The ability to create virtual sources corresponds fairly well to the theoretical expectations.
The video sequences that were developed give an interesting new angle on the problems that were investigated. Rather than looking at plots from different angles, which is difficult and time consuming, the videos give an intuitive perspective that illuminates the same issues as the commonly presented frequency and phase response measurements.</p>
|
129 |
Ultra-Wideband Sensor-Communication
Amat Pascual, Ángel José January 2008 (has links)
<p>One of the fundamental concerns in wireless communications with battery-operated terminals is battery life. Basically there are two ways of reducing power consumption: the algorithms should be simple and efficiently implemented (at least in the wireless terminals), and the transmit power should be limited. This document considers discrete-time, progressive signal transmission with feedback [ramstad]. For a forward Gaussian channel with an ideal feedback channel, the system performs according to OPTA (Optimal Performance Theoretically Attainable [berger]). In this case, with substantial bandwidth expansion through multiple retransmissions, the power can be lowered to a theoretical minimum. In the case of a non-ideal return channel, the results are limited by the feedback channel's signal-to-noise ratio. Going one step further, a more realistic view of the channel considers fading due to multiple reflections, especially in indoor scenarios. This thesis discusses how to model channel fading and how to simulate it from different probability distributions. Then, some solutions to avoid, or at least reduce, the undesirable effects caused by the fading are proposed. In these solutions, the fading characteristics (power and dynamic range) and the application requirements play a very important role in the final system design. Finally, transmission of a realistic signal in a realistic scenario is attempted: audio transmission over fading channels. The results are compared in general terms to similar equipment such as a generic wireless microphone system.</p>
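The fading-simulation step can be sketched for the Rayleigh case, a common flat-fading model for dense indoor multipath: draw the channel as a zero-mean complex Gaussian and take its magnitude. A minimal sketch of one such distribution (normalization to unit average power is illustrative; the thesis also considers other distributions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Complex Gaussian channel taps, normalized so E[|h|^2] = sigma^2 = 1.
sigma = 1.0
h = (rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)) / np.sqrt(2)

# Rayleigh-distributed fading envelope.
envelope = np.abs(h)
```

Sanity checks against the known Rayleigh moments (mean envelope sigma*sqrt(pi)/2 ~= 0.886, mean power sigma^2 = 1) confirm the samples follow the intended distribution before they are applied to the progressive transmission scheme.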
|
130 |
Optimisation of a Pipeline ADC by using a low power, high resolution Flash ADC as backend.
Høye, Dag Sverre January 2008 (has links)
<p>Flash ADCs with resolutions from 3 to 5 bits have been implemented at transistor level. These ADCs are to be incorporated as the backend of a higher-resolution Pipeline ADC. The motivation for this work has been to see how much the resolution of this backend can be increased before the power consumption becomes too high. This is beneficial in Pipeline ADCs because the number of pipeline stages is reduced, so that the throughput delay of the Pipeline ADC is also reduced. All the Flash ADCs are implemented with the same capacitive interpolation technique. This technique was found to have several beneficial properties compared to other power-saving techniques applied to Flash ADCs in a project assignment done prior to this thesis. The simulation results show that the resolution of the backend can be increased to 5 bits, both in terms of power and of other static and dynamic performance parameters.</p>
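The flash architecture under optimisation can be sketched behaviourally: 2^N - 1 comparators compare the input against a ladder of uniformly spaced thresholds, producing a thermometer code that is then encoded to binary. A minimal ideal-comparator sketch (threshold placement is illustrative; it ignores comparator offset and the capacitive-interpolation trick, which reduces the number of physical preamplifiers in the actual design):

```python
def flash_adc(vin, vref=1.0, bits=5):
    """Behavioural model of an ideal `bits`-bit flash ADC: compare the
    input against 2**bits - 1 mid-rise thresholds (thermometer code),
    then encode to binary by counting the ones."""
    n_levels = 2**bits
    thresholds = [(i + 0.5) * vref / n_levels for i in range(n_levels - 1)]
    thermometer = [vin > t for t in thresholds]
    return sum(thermometer)   # binary output code, 0 .. 2**bits - 1
```

Going from 4 to 5 bits doubles the comparator count (15 to 31), which is the power-versus-pipeline-depth tradeoff the thesis quantifies.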
|