211

Multiple Sensor Data Analysis, Fusion, and Communication for ULTRASPONDER

Gutiérrez Perera, Carlos Sergio January 2009
This thesis covers part of the study comprised in the ULTRASPONDER (In vivo Ultrasonic Transponder System for Biomedical Applications) project. The main area of interest is how to combine different signals so as to improve the diagnostic information carried by ECG records. It is believed that monitoring blood pressure inside the heart may give vital information for correctly diagnosing and treating chronic heart failure patients. Moreover, heart rate variability analysis has proved to be one of the most important risk predictors for detecting ventricular tachycardias and flutter. With this focus, the thesis provides a solid background on cardiac anatomy and physiology, uncommon in many engineering texts, in order to understand the biological changes that affect the waveforms. It then moves on to a theoretical and statistical study to find correlations, redundancies, or new information content in the signals intended to cohabit in the ULTRASPONDER control unit, namely signals from the intra-cavity pressure sensors, ECG electrodes, and other types of sensors, as well as heart rate time series. Because this control unit, implanted underneath the patient's skin, must handle several different signals and transmit clinically relevant information in a power-constrained manner to an external device, which may have a much larger amount of resources, all signal processing performed in the control unit must be kept under a reasonable limit that permits efficient extraction of information about the patient's health without decreasing the device's lifetime.

We have implemented two time-domain QRS complex detection systems, two simple beat classification algorithms based on beat-to-beat segmentation and template correlation, and some HRV measures as fundamental elements of ECG signal processing. Detection performance is analyzed from a critical point of view, considering several less common parameters, such as Qalpha and MCC, which collect much more information than the usual sensitivity and predictivity assessments. A closed-loop DPCM system was chosen for the encoding and compression tasks; experiments show its validity for ECG and blood pressure signals, although they advise against its usage for HR time series. Compression performance is analyzed in terms of the compression ratio attained and the distortion introduced. A novel measure called the "compressibility quotient" (CQ) is presented as an indicator of the balance between the theoretical compression limit marked by the sample entropy and the actual compression obtained with a concrete scheme, in terms of the CR-distortion tradeoff. A strong correlation between signal-to-noise ratio and CQ was found, implying that this measure might have some relevance for analyzing real compression possibilities under some quality criteria.

The approaches followed in this thesis, particularly regarding the theoretical study and data fusion comments, are valid for the ECG, blood pressure, and heart rate signals considered, and can likewise be applied to new signals that might become of interest in future years. When new sensors are implemented to provide distinct signals, a theoretical study can include them to find out their usefulness and relation to the signals already considered. Data fusion should then be reviewed to assess the validity and convenience of the communication system for the new set of significant signals.
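
To make the compression scheme concrete, here is a minimal sketch of a closed-loop DPCM codec: the predictor operates on the quantized reconstruction, so encoder and decoder stay in sync despite quantization. The first-order predictor, step size, and toy ECG-like test signal are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

def dpcm_encode(x, step=0.02):
    x_hat = 0.0                      # decoder-side reconstruction of previous sample
    codes = np.empty(len(x), dtype=np.int64)
    for n, sample in enumerate(x):
        e = sample - x_hat           # prediction error (first-order predictor)
        q = int(np.round(e / step))  # uniform quantization of the residual
        codes[n] = q
        x_hat = x_hat + q * step     # closed loop: track what the decoder sees
    return codes

def dpcm_decode(codes, step=0.02):
    x_hat = 0.0
    out = np.empty(len(codes))
    for n, q in enumerate(codes):
        x_hat = x_hat + q * step
        out[n] = x_hat
    return out

# Toy ECG-like test: the residual has far lower entropy than the raw samples,
# which is what makes entropy coding of the DPCM codes pay off.
t = np.linspace(0, 1, 500)
ecg_like = np.sin(2 * np.pi * 5 * t) * np.exp(-((t % 0.2) - 0.05) ** 2 / 1e-3)
rec = dpcm_decode(dpcm_encode(ecg_like))
snr = 10 * np.log10(np.sum(ecg_like**2) / np.sum((ecg_like - rec)**2))
print(f"reconstruction SNR: {snr:.1f} dB")
```
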
212

Dynamic Bias for RF Class A Power Amplifiers

Caharija, Walter January 2009
This thesis focuses on class A radio frequency power amplifiers in dynamic supply modulation architectures (dynamic bias). These are promising efficiency enhancement techniques where the device is driven harder by varying its bias signals. The nonlinearities that arise are assumed to be digitally compensated through, for example, digital predistortion (DPD). The bias signals are treated as functions of the PA's output power level (P_out); therefore, the input power level (P_in) as well as the feeding signals are regarded as the quantities the amplifier needs to deliver a certain P_out. The selected set of bias points the device sweeps through is called the bias trajectory or bias path. A tool to find a suitable bias trajectory is developed, considering the requirements a class A power amplifier should satisfy: high power-added efficiency, acceptable gain and output phase variations as P_out changes (allowing a DPD algorithm to be effective), low harmonic distortion, and not too complicated bias signal patterns. The tool consists of two software packages: ADS and Matlab. ADS simulates the device under test, while Matlab allows the user to analyze the data and find a suitable bias path. Once a trajectory is identified, ADS can sweep along it and give more information on linearity and efficiency through, for instance, two-tone harmonic balance simulations. Note that only static characteristics are evaluated and memory effects are disregarded. The path-searching algorithm is then applied to an HBT transistor at a frequency of 1.9 GHz and to a complete pHEMT class A PA (frequency of 6 GHz). In both cases a suitable trajectory is identified and analyzed back in ADS. The Matlab plots are qualitatively similar to each other when switching from one device to another. The HBT transistor has also been tested in the laboratory, and static measurements have been performed, showing good agreement with simulations. Keywords: bias trajectory, dynamic bias, efficiency, HBT, linearity, pHEMT, power amplifier
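
As an illustration of what such a path search can look like (this is my own sketch, not the ADS/Matlab tool described above), the following picks, for each target output power, the most efficient simulated bias point whose gain and phase stay close to the previous point on the path, so that a DPD stage only has to correct smooth variations. The record layout and thresholds are assumptions.

```python
def pick_trajectory(sweep, p_out_levels, max_dgain_db=1.0, max_dphase_deg=10.0):
    """sweep: list of dicts with keys p_out, bias, pae, gain_db, phase_deg (assumed layout)."""
    path, prev = [], None
    for p in p_out_levels:
        # candidates that deliver (approximately) the wanted output power
        cands = [s for s in sweep if abs(s["p_out"] - p) < 0.25]
        if prev is not None:
            # keep gain/phase steps small so a DPD stage stays effective
            cands = [s for s in cands
                     if abs(s["gain_db"] - prev["gain_db"]) <= max_dgain_db
                     and abs(s["phase_deg"] - prev["phase_deg"]) <= max_dphase_deg]
        if not cands:
            raise ValueError(f"no admissible bias point near P_out = {p} dBm")
        prev = max(cands, key=lambda s: s["pae"])   # pick the most efficient point
        path.append(prev)
    return path
```
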
213

Suppression of Radar Echoes produced below the Liquid Surface close to the Base of a Storage Container for LNG

Andersen, Arne Helge January 2007
A bottom absorber was designed to match the overlying medium.
214

Study of a 145 MHz Transceiver

Birkeland, Roger January 2007
After the planning phase in autumn 2006, the work on the student satellite project evolved into sub-system design and prototyping. The work presented in this report considers a proposal for a VHF radio system intended for a small student satellite. The design process started from scratch, without looking much at earlier ncube designs, since almost no documentation is to be found about their actual construction and final measurements. Three design concepts were developed: one featuring an integrated transceiver, one a self-designed FSK radio, and the last one using a GMSK modem to solve the modulation and demodulation issues. As the design was chosen and the work of selecting components commenced, it became clear that the chosen design would end up not unlike the receiver proposed for ncube. The reason for this is component availability, especially the SA606 IF sub-system and the GMSK modem. During test and measurement, a few issues were discovered. The proposed low noise amplifiers seem to be a dead end at these frequencies, and alternatives must be found. The layout for the SA606 has been improved and seems to function as required. Since the chosen layout is quite similar to the previous ncube 145 MHz receiver, this shows that the components selected for this design are a good solution. However, the design is so extensive that more work is required before a prototype is ready. It can be questioned whether the first design proposal would have been less extensive and could have led to a finished prototype within the assigned time frame. In any case, link budgets and power estimates show that it is possible to build such a system within the defined limits.
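
A link budget of the kind mentioned in the last sentence boils down to a few dB-domain additions. The sketch below uses invented numbers (transmit power, antenna gains, slant range), not the report's figures, and only illustrates the free-space part of the calculation.

```python
import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss: 20*log10(4*pi*d*f/c)
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3e8)

# Assumed values for illustration: 0.5 W transmitter on the satellite,
# omni antenna on board, 12 dBi Yagi on the ground, worst-case slant range.
p_tx_dbm, g_tx_dbi, g_rx_dbi = 27.0, 0.0, 12.0
slant_range_m, f_hz = 2000e3, 145e6

loss = fspl_db(slant_range_m, f_hz)
p_rx_dbm = p_tx_dbm + g_tx_dbi - loss + g_rx_dbi
print(f"FSPL = {loss:.1f} dB, P_rx = {p_rx_dbm:.1f} dBm")
```
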
215

Compensation of Loudspeaker Nonlinearities: DSP Implementation

Øyen, Karsten January 2007
Compensation of loudspeaker nonlinearities is investigated. A compensation system based on a loudspeaker model (a computer simulation of the real loudspeaker) is first simulated in Matlab and later implemented on a DSP for real-time testing. So far it is a pure feedforward system, meaning that no feedback measurement of the loudspeaker is used. Loudspeaker parameters drift due to temperature and aging, which reduces the performance of the compensation. To complete the system, online tracking of the loudspeaker's linear parameters is needed (also known as parameter identification). Previous investigations (by Andrew Bright and also Bo R. Pedersen) show that the loudspeaker's linear parameters can be found by calculations based on measurements of the loudspeaker's current; this is a subject for further work. Without the parameter identification, the compensation system is briefly tested, with the loudspeaker diaphragm excursion as the output measure. The loudspeaker output and the output of the loudspeaker model are both monitored, and the loudspeaker model is manually adjusted to fit the real loudspeaker. This is done by real-time tuning on the DSP. The system seems to work for some input frequencies and not for others.
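
The feedforward principle can be shown in a few lines: if the loudspeaker is approximated by a known invertible nonlinearity y = f(u), driving it with u = f_inv(x) makes the cascade reproduce the desired x. The memoryless tanh model and bisection inverse below are my simplifications; a real driver model also has dynamics (memory), which this sketch ignores.

```python
import numpy as np

def f(u):                      # assumed static nonlinearity (soft compression)
    return np.tanh(1.5 * u) / np.tanh(1.5)

def f_inv(x, lo=-1.0, hi=1.0, iters=60):
    # invert the monotone model by bisection, sample by sample
    x = np.asarray(x, dtype=float)
    lo = np.full_like(x, lo)
    hi = np.full_like(x, hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        too_low = f(mid) < x
        lo = np.where(too_low, mid, lo)
        hi = np.where(too_low, hi, mid)
    return 0.5 * (lo + hi)

x = 0.8 * np.sin(np.linspace(0, 2 * np.pi, 256))  # desired output
y = f(f_inv(x))                                   # precompensated chain
print("max residual error:", np.max(np.abs(y - x)))
```
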
216

Application of UWB Technology for Positioning: A Feasibility Study

Canovic, Senad January 2007
Ultra wideband (UWB) signaling and its usability in positioning schemes are discussed in this report. A description of UWB technology is provided, covering both the advantages and disadvantages involved. The main focus is on Impulse Radio UWB (IR-UWB), since this is the most common way of emitting UWB signals. IR-UWB operates at a very large bandwidth and at low power, based on a technique that consists of emitting very short pulses (on the order of nanoseconds) at a very high rate. The result is low power consumption at the transmitter but increased complexity at the receiver. The transmitter is based on the so-called Time Hopping UWB (TH-UWB) scheme, while the receiver is a RAKE receiver with five branches. IR-UWB also provides good multipath properties, secure transmission, and accurate positioning, with the latter being the main focus of this report. Four positioning methods are presented with a view to finding which is the most suitable for UWB signaling. Received Signal Strength (RSS), Angle Of Arrival (AOA), Time Of Arrival (TOA), and Time Difference Of Arrival (TDOA) are all considered, and TDOA is found to be the most appropriate. Increasing the SNR or the effective bandwidth increases the accuracy of time-based positioning schemes; TDOA thus exploits the large bandwidth of UWB signals to achieve more accurate positioning, in addition to its synchronization advantages over TOA. The TDOA positioning scheme is tested under realistic conditions and the results are provided. A sensor network is simulated based on indications provided by WesternGeco. Each sensor consists of a transmitter and receiver which generate and receive signals transmitted over a channel modeled after the IEEE 802.15.SG3 channel model. It is shown that the transmitter power and sampling frequency, the distance between the nodes, and the position of the target node all influence the accuracy of the positioning scheme. For a common sampling frequency of 55 GHz, power levels of -10 dBm, -7.5 dBm, and -5 dBm are needed in order to achieve satisfactory positioning at distances of 8, 12, and 15 meters, respectively. The need for choosing appropriate reference nodes when the target node is located on the edges of the network is also pointed out.
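
For reference, TDOA localization itself reduces to a small nonlinear least-squares problem over the measured range differences. The sketch below (the anchor geometry and Gauss-Newton solver are my assumptions, not the report's simulator) recovers a 2-D position from noise-free TDOAs.

```python
import numpy as np

def tdoa_solve(anchors, d, x0, iters=20):
    """anchors: (N,2) array, anchor 0 is the reference; d: (N-1,) range differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = np.linalg.norm(anchors - x, axis=1)        # distances to all anchors
        res = (r[1:] - r[0]) - d                       # residuals of the TDOA equations
        # Jacobian of (r_i - r_0) with respect to x
        J = (x - anchors[1:]) / r[1:, None] - (x - anchors[0]) / r[0]
        x = x + np.linalg.lstsq(J, -res, rcond=None)[0]  # Gauss-Newton step
    return x

anchors = np.array([[0., 0.], [15., 0.], [0., 15.], [15., 15.]])
target = np.array([8., 12.])
r = np.linalg.norm(anchors - target, axis=1)
d = r[1:] - r[0]                                       # noise-free range differences
print(tdoa_solve(anchors, d, x0=[7., 7.]))             # converges to approx [8, 12]
```
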
217

Diffusion-Based Model for Noise-Induced Hearing Loss

Aas, Sverre, Tronstad, Tron Vedul January 2007
Among several different damaging mechanisms, oxidative stress is found to play an important role in noise-induced hearing loss (NIHL). This is supported both by findings of oxidative damage after noise exposure and by the fact that upregulation of antioxidant defenses seems to reduce the ear's susceptibility to noise. Oxidative stress mechanisms could help explain several of the characteristics of NIHL, and we therefore believe that it would be advantageous to estimate noise-induced hearing impairment on the basis of these, rather than the prevailing energy-based methods. In this thesis we have tried to model the progress of NIHL using diffusion principles, under the assumption that accumulation of reactive oxygen species (ROS) is the cause of hearing impairment. Production, and the subsequent accumulation, of ROS in a group of outer hair cells (OHCs) is assessed through different implementations of sound pressure as the input parameter, and the ROS concentration is used to estimate the noise-induced threshold shift. The amount of stress experienced by the ear is implemented as a summation of ROS concentration with different exponents of power. Measured asymptotic threshold shift (ATS) values are used as a calibrator for the development of threshold shifts. Additionally, the results are evaluated against the standards developed by the International Organization for Standardization (ISO) and the American Occupational Safety and Health Administration (OSHA). The results indicate that ROS production is not directly proportional to the sound pressure, but rather shows an accelerated formation and accumulation with increasing sound pressure levels (SPLs). There are also indications that the correlation between the concentration of ROS and temporary threshold shift (TTS) and/or permanent threshold shift (PTS) is more complex than our assumption. Because our model is based on diffusion principles, we get the same tendency of noise-induced hearing loss development as experimentally measured TTS development. It also takes into account the potentially damaging mechanisms which occur during recovery after exposure, and it has the ability to use TTS data for calibration. We therefore suggest that modeling ROS accumulation in the hair cells could be used advantageously to estimate noise-induced hearing loss.
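
A toy version of such a model can be written down directly: ROS is produced in each outer hair cell at a rate growing supra-linearly with sound pressure, diffuses to neighbouring cells, and is cleared by antioxidant defenses. All parameters below are invented for illustration and carry no physiological calibration.

```python
import numpy as np

def simulate_ros(spl_db, hours, n_cells=50, D=0.05, clearance=0.1, gamma=2.0):
    p = 20e-6 * 10 ** (spl_db / 20)          # sound pressure in Pa
    production = (p / 0.02) ** gamma         # assumed supra-linear dose term
    c = np.zeros(n_cells)                    # ROS concentration per OHC
    for _ in range(int(hours * 60)):         # 1-minute time steps
        lap = np.roll(c, 1) + np.roll(c, -1) - 2 * c   # discrete diffusion term
        c = c + D * lap - clearance * c + production   # diffuse, clear, produce
        c = np.maximum(c, 0.0)
    return c

for spl in (85, 95, 105):
    print(spl, "dB SPL -> mean ROS:", round(simulate_ros(spl, hours=8).mean(), 1))
```
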
218

Vectorized 128-bit Input FP16/FP32/FP64 Floating-Point Multiplier

Stenersen, Espen January 2008
3D graphics accelerators are often limited by their floating-point performance. A Graphics Processing Unit (GPU) has several specialized floating-point units to achieve high throughput and performance. The floating-point units consume a large part of the total area and power consumption, and hence architectural choices are important to evaluate when implementing the design. GPUs are specially tuned for performing a set of operations on large sets of data. The task of a 3D graphics solution is to render an image or a scene. The scene contains geometric primitives as well as descriptions of the light, the way each object reflects light, and the viewer's position and orientation. This thesis evaluates four different pipelined, vectorized floating-point multipliers supporting 16-bit, 32-bit, and 64-bit floating-point numbers. The architectures are compared with respect to area usage, power consumption, and performance. Two of the architectures are implemented at Register Transfer Level (RTL), tested, and synthesized, to see if the assumptions made in the estimation methodologies are accurate enough to select the best architecture to implement given a set of architectures and constraints. The first architecture trades area for lower power consumption, with a throughput of 38.4 Gbit/s at a 300 MHz clock frequency, and the second architecture trades power for smaller area with equal throughput. The two architectures are synthesized at 200 MHz, 300 MHz, and 400 MHz clock frequencies, in a 65 nm low-power standard cell library and a 90 nm general-purpose library, and for different input data format distributions, to compare area and power results across clock frequencies, input data distributions, and target technologies. Architecture one has lower power consumption than architecture two at all clock frequencies and input data format distributions. At 300 MHz, architecture one has a total power consumption of 1.9210 mW at 65 nm and 15.4090 mW at 90 nm, while architecture two has a total power consumption of 7.3569 mW at 65 nm and 17.4640 mW at 90 nm. Architecture two requires less area than architecture one at all clock frequencies. At 300 MHz, architecture one has a total area of 59816.4414 um^2 at 65 nm and 116362.0625 um^2 at 90 nm, while architecture two has a total area of 50843.0 um^2 at 65 nm and 95242.0469 um^2 at 90 nm.
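
The quoted throughput follows directly from the interface width and clock: one 128-bit input word per cycle at 300 MHz is 38.4 Gbit/s regardless of how the word is split into lanes, as this quick check confirms.

```python
# One 128-bit input word per cycle at 300 MHz gives the stated 38.4 Gbit/s;
# the floating-point format only changes how many parallel lanes share the word.
for fmt, width in (("FP16", 16), ("FP32", 32), ("FP64", 64)):
    lanes = 128 // width
    print(f"{fmt}: {lanes} lanes, {128 * 300e6 / 1e9:.1f} Gbit/s")
```
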
219

Delay-Fault BIST in Low-Power CMOS Devices

Leistad, Tor Erik January 2008
Devices such as microcontrollers are often required to operate across a wide range of voltages and temperatures. Delay variation across temperature and voltage corners can be large, and for deep submicron geometries delay faults are more likely than for larger geometries. This has made delay fault testing necessary. Scan testing is widely used as a test method, but it is slow due to the time spent shifting test vectors and responses, and it also needs modification to support delay testing. This assignment is divided into three parts. The first part investigates some of the effects in deep submicron technologies, then looks at different fault models, and finally investigates different techniques for delay testing and BIST approaches. The second part suggests a design for a test chip, including a circuit under test (CUT) and BIST logic. The final part investigates how the selected BIST logic can be used to reduce test time and what considerations need to be made to get an optimal solution. The suggested design is a co-processor with an SPI slave interface. Since scan-based testing is commonly used today, STUMPS was selected as the BIST solution. Assuming that scan is already used, STUMPS will have little impact on the performance of the CUT, since it is based on scan testing. During analysis it was found that several aspects of the CUT design affect the maximum obtainable delay fault coverage. It was also found that careful design of the BIST logic is necessary to get the best fault coverage and a solution that reduces the overall cost. The results show that a large amount of time can be saved during test by using BIST, but since the area of the circuit increases due to the BIST logic, this does not necessarily mean that the overall design cost is reduced. Whether or not a BIST solution will result in reduced cost depends on the complexity of the circuit that is tested, how well the BIST logic fits this circuit, how many internal scan chains can be used, and how fast scan vectors can be applied under BIST. In this case the BIST logic appears not to be well suited to detect the random hard-to-detect faults, which results in a large number of top-up patterns. This, combined with the large area of the BIST logic, makes it unlikely that BIST will reduce the cost of this design.
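
At the heart of STUMPS is a pseudo-random pattern source feeding several internal scan chains in parallel, which is where the shift-time saving comes from. The sketch below shows a maximal-length LFSR dealing bits out to a few chains; the polynomial, widths, chain count, and the absence of a phase shifter are simplifications of a real STUMPS configuration.

```python
# Maximal-length 7-bit Fibonacci LFSR (x^7 + x^6 + 1) as a pattern source.
def lfsr_bits(seed=0b1011011, taps=(7, 6), width=7):
    state = seed
    while True:
        fb = 0
        for t in taps:                  # XOR the tapped bit positions
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        yield state & 1                 # emit the newly shifted-in bit

gen = lfsr_bits()
chains = 4                              # parallel internal scan chains (assumed)
for i in range(chains):
    print(f"chain {i}:", [next(gen) for _ in range(8)])
```
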
220

Power Allocation In Cognitive Radio

Canto Nieto, Ramon, Colmenar Ortega, Diego January 2008
One of the major challenges in the design of wireless networks is the use of the frequency spectrum. Numerous studies on spectrum utilization show that 70% of the allocated spectrum is in fact not utilized. This has led researchers to think about better ways of using the spectrum, giving rise to the concept of Cognitive Radio (CR). Perhaps one of the main goals when designing a CR system is to find the best way of deciding when a user should be active and when not. In this thesis, the performance of a binary power allocation protocol is analyzed in depth under different conditions for a defined network. The main metric used is the probability of outage, studying the behavior of the system for a wide range of values of different transmission parameters such as rate, outage probability constraints, protection radius, power ratio, and maximum transmission power. All the studies are performed on a network with only one primary user per cell, communicating with a base station. This user shares the cell with N potential secondary users, randomly distributed in space, communicating with their respective secondary receivers, of which only M are allowed to transmit according to the binary power control protocol. In order to analyze the system broadly and guide the reader to a better comprehension of its behavior, different considerations are taken. First, an ideal model with no error in the channel information acquisition and random switching-off of users is presented. Second, we try to improve the behavior of the system by developing different methods for deciding to drop a user when it is harmful to the primary user's communication. Besides this, more realistic models of the channel state information are considered, including log-normal and Gaussian error distributions. The methods and modifications used to reach the analytical results are presented in detail, and these results are accompanied by simulations. Some results that do not accord with theoretical expectations are also presented and commented on, in order to open further avenues of development and research.
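
To see how the outage metric behaves, a Monte-Carlo sketch of binary on/off power control is enough: secondary users inside a protection radius around the primary receiver are switched off, and outage is the event that the primary link's SINR falls below the threshold implied by the target rate. The geometry, fading model, and all parameter values below are my simplifying assumptions, not the thesis's exact network model.

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_prob(n_sec=20, cell_r=100.0, guard_r=60.0, rate=1.0,
                p_p=1.0, p_s=0.1, alpha=3.5, d_p=50.0, trials=20_000):
    sinr_min = 2.0 ** rate - 1.0        # Shannon threshold for the target rate
    outages = 0
    for _ in range(trials):
        # secondary users dropped uniformly over the cell area; distances are
        # measured to the primary receiver, assumed at the cell center
        r = cell_r * np.sqrt(rng.random(n_sec))
        active = r > guard_r            # binary power control: on or off
        h_p = rng.exponential()         # Rayleigh fading power gains
        h_s = rng.exponential(size=n_sec)
        signal = p_p * h_p * d_p ** -alpha
        interference = np.sum(p_s * h_s[active] * r[active] ** -alpha)
        if signal / (1e-15 + interference) < sinr_min:   # interference-limited
            outages += 1
    return outages / trials

print("estimated outage probability:", outage_prob())
```
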
