151

Cochlear Features for Acoustic Segmentation

Hamar, Jarle Bauck January 2009 (has links)
This work explores an alternative to the frequently used mel-frequency cepstral coefficients (MFCCs). The cochlear features simulate the nerve-fibre signal sent from the ear to the brain. The main interest of this study is the use of cochlear features for acoustic segmentation. Both the cochlear features and a variant combining them with zero crossing with peak amplitude (ZCPA) have been used as input to an acoustic segmentation algorithm. In addition, experiments using the cochlear features as input to an artificial neural network (ANN) classifying each vector as boundary/non-boundary have been performed. The results show that the features carry a great deal of information about the speech signal; in particular, the combination of cochlear and ZCPA features gives good results.
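As an illustration of the ZCPA front end mentioned in the abstract, the sketch below computes a single-frame ZCPA feature vector: a band-pass filterbank, upward zero crossings per channel, and a frequency histogram weighted by the log of the peak amplitude in each crossing interval. This is a generic sketch, not the thesis implementation; the filter order, band edges and bin count are assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

def zcpa_frame(frame, fs, band_edges, n_bins=16):
    """Illustrative ZCPA feature for one speech frame (not the thesis code).

    For each band-pass channel, upward zero crossings are located; the
    inverse of each crossing interval gives a frequency estimate, and the
    log of the peak amplitude inside the interval weights a histogram bin
    at that frequency. Summing the histograms over channels yields the
    feature vector.
    """
    hist_edges = np.linspace(0.0, fs / 2.0, n_bins + 1)
    feat = np.zeros(n_bins)
    for lo, hi in band_edges:
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        x = lfilter(b, a, frame)
        up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]   # upward zero crossings
        for i0, i1 in zip(up[:-1], up[1:]):
            freq = fs / (i1 - i0)                        # interval -> frequency estimate
            peak = np.max(np.abs(x[i0:i1]))              # peak amplitude in interval
            k = np.searchsorted(hist_edges, freq) - 1
            if 0 <= k < n_bins:
                feat[k] += np.log1p(peak)                # log-compressed weighting
    return feat

# Example: a 30 ms frame at 8 kHz with a few auditory-like bands (values assumed)
fs = 8000
frame = np.random.randn(240)
bands = [(100, 400), (400, 1000), (1000, 2000), (2000, 3500)]
print(zcpa_frame(frame, fs, bands))
```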
152

Radiowave Propagation at Ka-band (20/30 GHz) for Satellite Communication in High-Latitude Regions

Rytír, Martin January 2009 (has links)
Atmospheric impairments are a major obstacle to satellite communications at Ka-band in high-latitude regions. This report gives a short summary of the existing models that can be used to model the impairments. Further, a simple measurement system based on satellite beacon reception is designed using locally available and off-the-shelf components as well as locally manufactured ones. The performance of the components and of the whole system is examined and found to be in agreement with the expected values, with an overall system figure of merit (G/T) of 21 dB/K. Data from 25 days of measurements are presented and compared with model predictions. The comparison points to possible deficiencies in some of the system components that should be assessed before further use, most notably the low amplitude accuracy of the spectrum analyzer and the low sampling rate of the data acquisition system.
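The receiving figure of merit quoted above relates antenna gain and system noise temperature through G/T [dB/K] = G [dBi] - 10*log10(T_sys [K]). A minimal sketch of that arithmetic follows; the gain and noise-temperature values are illustrative, not figures from the thesis.

```python
import math

def figure_of_merit(gain_dbi: float, t_sys_kelvin: float) -> float:
    """Standard receiving figure of merit G/T in dB/K."""
    return gain_dbi - 10.0 * math.log10(t_sys_kelvin)

# Illustrative values only: a ~45 dBi antenna and a ~250 K system noise
# temperature give roughly the 21 dB/K quoted in the abstract.
print(figure_of_merit(45.0, 250.0))   # ~21.0 dB/K
```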
153

Realization of Underwater Acoustic Networks.

Riksfjord, Håkon January 2009 (has links)
This work contains a study of underwater acoustic networks. The concept of underwater acoustic networks is presented with its benefits and drawbacks. An overview of the marine research areas oceanography, seismology, waterside security, marine pollution and marine biology has been made, and a review of conventional methods and instrumentation has been conducted. The research methods used today have been compared with the potential of underwater acoustic networks as a platform for maritime applications. Underwater acoustic networks were found feasible within all areas, with some restrictions. The fact that a respectable data rate is best achieved for nodes deployed in a high-density grid limits the coverage area. Battery power limits the life span of an underwater acoustic network and makes it best suited for short-term monitoring missions unless a recharging technology is applied. The energy restrictions also constrain the amount of sensing performed and the temporal resolution of the measurements. Underwater acoustic networks were found applicable to intrusion detection in waterside security, to increase the range of current ultrasonic surveillance systems or to realize distributed systems for passive diver detection. In oceanography and pollution monitoring, current in situ sensors may enable underwater acoustic networks to perform autonomous synoptic sampling of limited areas, measuring a number of parameters, e.g. oxygen, turbidity, temperature and salinity. For seismic exploration, this technology might reduce the cost of permanent seismic installations for continuous monitoring of producing oil fields. It might also aid marine biologists in habitat monitoring.
154

The Design of a Low Cost Beacon Receiver System using Software Defined Radio

Mikkelsen, Eivind Brauer January 2009 (has links)
Due to increasing ship traffic and activities related to oil and gas, there is currently great interest in the northern regions of Norway. Satellite communication to these areas, i.e. north of the Arctic Circle, is however challenging due to low elevation angles and restricted visibility of geostationary satellites. Limited work has been done to study the propagation effects at these latitudes and low elevation angles, especially at millimetre frequencies and for maritime communications. Some measurements have been conducted at Svalbard [5] and in Canada [5.1]. The studies from Svalbard were conducted at Ku-band frequencies, whilst the Canadian measurements were conducted at 38 GHz. Neither of the two included maritime measurements. Further measurements are therefore needed to characterize the propagation effects under these conditions. A beacon receiver is a radio used to detect and measure the signal strength of a transmitted radio beacon signal. Beacon signals transmitted by satellites are often low-power continuous-wave signals intended for antenna steering and power control purposes. These signals are well suited for propagation measurements due to their constant transmit power and frequency. Propagation research often relies on beacon measurements along with other information such as weather data and radiometer readings. This thesis discusses the design and implementation of a low-cost beacon receiver based on digital signal processing techniques and software defined radio. The intention was originally to design a Ka-band (20 GHz) receiver; this was however extended to a general-purpose beacon receiver intended to operate at an L-band intermediate frequency. Different architectures and realizations are discussed with emphasis on cost and performance. It is shown that a 1.2 m antenna, receiving a Ka-band beacon with 9 dBW EIRP, would produce a signal level of about -130 dBm at its output. This would in turn yield a C/N0 ratio of about 46 dBHz at 76° North, assuming a receiver with an overall noise figure of 1.5 dB and clear-air conditions. Based on the link budget calculations, two different beacon receiver designs are proposed: one based on the superheterodyne receiver architecture realized with standard RF components such as mixers and amplifiers with coaxial connectors, and a second based on the universal software radio peripheral (USRP), a software radio intended to let personal computers function as radio transceivers. It was found that building a complete beacon receiver from standard RF components would require about 100,000 NOK to achieve the wanted performance; this includes a complete system with antenna, front-end and baseband receiver. Due to the relatively inexpensive hardware (4900 NOK) of the USRP and the availability of front-end plug-in boards in the required intermediate frequency range, the USRP was chosen as the hardware portion of the receiver. Linearity measurements and observations of the USRP output spectrum show a linear dynamic range of about 60 dB, which is found sufficient for beacon measurements. A Ku-band antenna intended for television reception has been used to receive a 12.2 GHz beacon transmitted by Eutelsat W3A. Software code was developed based on the GNU Radio framework in order to use the USRP as a beacon receiver.
A number of issues were discovered during this work:
• GNU Radio does not contain filters for spectral averaging.
• Attempts to implement additional functionality in software proved challenging due to limitations in computational speed.
Both issues affected the performance of the beacon receiver. Modifications and additions to the GNU Radio software are therefore suggested as future work.
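As a rough illustration of the C/N0 figure quoted above, the sketch below derives C/N0 from a received carrier level and receiver noise figure. The -130 dBm carrier level and 1.5 dB noise figure are taken from the abstract; the antenna noise temperature of about 50 K is an assumption added only to show how a value around 46 dBHz can arise.

```python
import math

K_BOLTZ = 1.380649e-23  # Boltzmann constant, J/K

def cn0_dbhz(carrier_dbm: float, noise_figure_db: float, t_antenna_k: float) -> float:
    """C/N0 from received carrier power and receiver noise, clear-air case.

    T_sys = T_antenna + (F - 1) * 290 K; N0 = k * T_sys.
    """
    f_lin = 10 ** (noise_figure_db / 10.0)
    t_sys = t_antenna_k + (f_lin - 1.0) * 290.0
    n0_dbm_hz = 10.0 * math.log10(K_BOLTZ * t_sys * 1000.0)  # *1000: W -> mW
    return carrier_dbm - n0_dbm_hz

# -130 dBm and NF 1.5 dB are from the abstract; the ~50 K antenna noise
# temperature is an assumption, yielding roughly 46 dBHz.
print(cn0_dbhz(-130.0, 1.5, 50.0))
```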
155

Multiple Sensor Data Analysis, Fusion, and Communication for ULTRASPONDER

Gutiérrez Perera, Carlos Sergio January 2009 (has links)
This thesis covers part of the study comprised in the ULTRASPONDER (In vivo Ultrasonic Transponder System for Biomedical Applications) project. The main area of interest is how to combine different signals so as to improve the diagnostic information carried by ECG records. It is believed that monitoring blood pressure inside the heart may give vital information for correctly diagnosing and treating chronic heart failure patients. Moreover, heart rate variability analysis has proved to be one of the most important risk predictors in detecting ventricular tachycardias and flutter. With this focus, the thesis provides a solid background on cardiac anatomy and physiology, uncommon in many engineering texts, in order to understand the biological changes that affect the waveforms. It then moves on to a theoretical and statistical study intended to find correlations, redundancies, or new information content in the signals intended to cohabit in the ULTRASPONDER control unit, namely signals from the intra-cavity pressure sensors, ECG electrodes and other types of sensors, as well as heart rate time series. Because this control unit, implanted underneath the patient's skin, must handle several different signals and transmit clinically relevant information in a power-constrained manner to an external device, which may have a much larger amount of resources, all signal processing performed in the control unit must be kept within a reasonable limit that permits efficient extraction of information about the patient's health without reducing the device's lifetime. We have implemented two time-domain QRS complex detection systems, two simple beat classification algorithms based on beat-to-beat segmentation and template correlation, and some HRV measures as fundamental elements of ECG signal processing. Detection performance is analyzed from a critical point of view, considering several less common parameters, such as Qalpha and MCC, which capture much more information than the usual sensitivity and predictivity assessments. A closed-loop DPCM system was chosen for the encoding and compression tasks, with experiments showing its validity for ECG and blood pressure signals, although advising against its use for HR time series. Compression performance is analyzed in terms of the compression ratio attained and the distortion introduced. A novel measure called the "compressibility quotient" (CQ) is presented as an indicator of the balance between the theoretical compression limit set by the sample entropy and the actual compression obtained with a concrete scheme, in terms of the tradeoff between CR and distortion. A strong correlation between signal-to-noise ratio and CQ was found, implying that this measure might have some relevance for analyzing real compression possibilities under some quality criteria. The approaches followed in this thesis, particularly regarding the theoretical study and data fusion comments, are valid for the ECG, blood pressure and heart rate signals considered, and can likewise be applied to new signals that may become of interest in future years. When new sensors are implemented to provide distinct signals, a theoretical study can include them to find out their usefulness and relation to the signals already considered. Data fusion should then be reviewed to assess the validity and suitability of the communication system for the new set of significant signals.
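The closed-loop DPCM encoder chosen for the compression task can be illustrated with the generic sketch below. The first-order predictor and the uniform quantizer step size are assumptions for illustration, not the parameters used in the thesis.

```python
import numpy as np

def dpcm_closed_loop(x, step=4):
    """Closed-loop DPCM with a first-order predictor and uniform quantizer.

    The prediction is formed from the *reconstructed* previous sample, so the
    decoder (which only sees the quantized residuals) stays in sync with the
    encoder and quantization error does not accumulate.
    """
    codes = np.zeros(len(x), dtype=int)
    recon = np.zeros(len(x))
    pred = 0.0
    for n, sample in enumerate(x):
        residual = sample - pred
        codes[n] = int(round(residual / step))     # transmitted symbol
        recon[n] = pred + codes[n] * step          # decoder-side reconstruction
        pred = recon[n]                            # predictor uses the reconstruction
    return codes, recon

# Example on a synthetic, slowly varying "pressure-like" signal
t = np.linspace(0, 1, 200)
x = 100 * np.sin(2 * np.pi * 1.2 * t) + 5 * np.random.randn(200)
codes, recon = dpcm_closed_loop(x, step=4)
print("max abs reconstruction error:", np.max(np.abs(x - recon)))   # bounded by step/2
```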
156

Dynamic Bias for RF Class A Power Amplifiers

Caharija, Walter January 2009 (has links)
This thesis focuses on class A radio frequency power amplifiers in dynamic supply modulation architectures (dynamic bias). These are promising efficiency enhancement techniques where the device is driven harder by varying its bias signals. Nonlinearities that arise are assumed to be digitally compensated through, for example, digital predistortion (DPD). Bias signals are treated as functions of the PA's output power level (Pout); the input power level (Pin) as well as the feeding signals are therefore regarded as the quantities the amplifier needs in order to deliver a certain Pout. The selected set of bias points the device sweeps through is called the bias trajectory or bias path. A tool to find a suitable bias trajectory is developed considering the requirements a class A power amplifier should satisfy: high power-added efficiency, acceptable gain and output phase variations as Pout changes (allowing a DPD algorithm to be effective), low harmonic distortion and not too complicated bias signal patterns. The tool consists of two software packages: ADS and Matlab. ADS simulates the device under test, while Matlab allows the user to analyze the data and find a suitable bias path. Once a trajectory is identified, ADS can sweep along it and give more information on linearity and efficiency through, for instance, 2-tone harmonic balance simulations. Note that only static characteristics are evaluated and memory effects are disregarded. The path-searching algorithm is then applied to an HBT transistor at a frequency of 1.9 GHz and to a complete pHEMT class A PA (frequency of 6 GHz). In both cases a suitable trajectory is identified and analyzed back in ADS. The Matlab plots are qualitatively similar to each other when switching from one device to another. The HBT transistor has then been tested in the laboratory, and static measurements have been performed showing good agreement with simulations. Keywords: Bias trajectory, dynamic bias, efficiency, HBT, linearity, pHEMT, power amplifier
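The bias-path search described above was implemented in ADS and Matlab; the Python sketch below is only a generic illustration of the selection step: for each output power level, pick the simulated bias point with the highest PAE whose gain and output-phase deviations stay within limits that a DPD stage could handle. All field names, limits and example values are assumptions.

```python
def pick_bias_trajectory(sim, max_gain_dev_db=1.0, max_phase_dev_deg=10.0):
    """Generic sketch of a bias-path search (not the thesis Matlab tool).

    `sim` is a list of simulated operating points, each a dict with keys
    'pout_dbm', 'vg', 'vd', 'pae', 'gain_db', 'phase_deg'. For every output
    power level the point with the highest PAE is kept, provided its gain and
    output phase stay within the allowed deviation from the trajectory's
    starting point (so that a DPD stage can linearize the result).
    """
    levels = sorted({pt["pout_dbm"] for pt in sim})
    trajectory, ref = [], None
    for level in levels:
        candidates = [pt for pt in sim if pt["pout_dbm"] == level]
        candidates.sort(key=lambda pt: pt["pae"], reverse=True)
        for pt in candidates:
            if ref is None or (
                abs(pt["gain_db"] - ref["gain_db"]) <= max_gain_dev_db
                and abs(pt["phase_deg"] - ref["phase_deg"]) <= max_phase_dev_deg
            ):
                trajectory.append(pt)
                ref = ref or pt   # first accepted point is the reference
                break
    return trajectory

# Tiny illustrative sweep (all numbers assumed)
sim = [
    {"pout_dbm": 10, "vg": -0.6, "vd": 3.0, "pae": 18, "gain_db": 12.1, "phase_deg": 40},
    {"pout_dbm": 10, "vg": -0.5, "vd": 4.0, "pae": 15, "gain_db": 12.0, "phase_deg": 42},
    {"pout_dbm": 20, "vg": -0.4, "vd": 5.0, "pae": 35, "gain_db": 11.5, "phase_deg": 45},
]
print(pick_bias_trajectory(sim))
```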
157

WISA vs. WLAN: Co-existence challenges : - A tool for WLAN performance testing

Strand, Erlend Barstad January 2007 (has links)
Wireless Interface for Sensors and Actuators (WISA) is ABB's proprietary wireless protocol for industrial automation on the factory floor. It operates in the 2.4 GHz ISM band. Wireless Local Area Networks (WLANs), which typically occupy a fixed portion of the same 2.4 GHz ISM band, are becoming more and more common on the factory floor. This raises the question of co-existence and of how the performance of traffic over WLAN is affected by interference from WISA. This report is the result of the development of a software tool and the assembly of hardware that can aid future testing of the effect WISA has on nearby WLANs. Together with an explanation of how to use this software tool, the report also investigates different arrangements of the hardware components used to demonstrate and test the functionality of the new tool. The software tool and the hardware components enable the measurement of important traffic metrics between two computers that communicate over a WLAN. The hardware components include a WISA Base Station (BS) that is configurable through the software tool and is used to cause interference on the WLAN.
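The traffic metrics mentioned above could, for example, be round-trip time and packet loss between the two WLAN computers. The sketch below is a generic UDP probe for such metrics, not the ABB/thesis tool; the echo-server address and port in the usage comment are hypothetical.

```python
import socket
import time

def udp_probe(host, port, count=100, timeout=0.5, payload=64):
    """Generic sketch of a WLAN traffic-metric probe (not the thesis tool).

    Sends `count` UDP packets to an echo service at host:port and records
    round-trip time and packet loss -- two of the metrics a co-existence
    test between WISA and WLAN would look at.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts, lost = [], 0
    for seq in range(count):
        msg = seq.to_bytes(4, "big") + bytes(payload - 4)
        t0 = time.perf_counter()
        sock.sendto(msg, (host, port))
        try:
            sock.recvfrom(2048)
            rtts.append(time.perf_counter() - t0)
        except socket.timeout:
            lost += 1
    sock.close()
    return {
        "loss_ratio": lost / count,
        "mean_rtt_ms": 1000 * sum(rtts) / len(rtts) if rtts else None,
    }

# Example (hypothetical echo-server address):
# print(udp_probe("192.168.0.10", 9000))
```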
158

Suppression of Radar Echoes produced below the Liquid Surface close to the Base of a Storage Container for LNG

Andersen, Arne Helge January 2007 (has links)
A bottom absorber was designed to match the overlying medium.
159

WISA vs. WLAN: Co-existence challenges : Analysis of frequency-hopping sequences

Sandnes, Erik Skarstein January 2007 (has links)
Wireless Interface for Sensors and Actuators (WISA) is ABB's proprietary wireless protocol for industrial automation. It operates in the 2.4 GHz ISM band, as do nearly all Wireless LAN systems. WISA does frequency hopping (FH) over most of the ISM band, but currently has no means of avoiding parts of the band occupied by other wireless systems. The objective of the diploma project was to create a Matlab-based simulation tool that can (i) analyze cross-correlation between FH sequences in two closely spaced WISA cells, and (ii) generate new FH sequences which avoid a user-selectable portion of the frequency band. New frequency-hopping sequences were designed using Galois field computations for creating periodic sequences with minimum correlation. The developed Matlab simulation module did indeed meet the objectives. However, the subband-allocation algorithm is not optimal and will in some cases not give maximum utilization of the available frequency band. Analysis of the existing FH algorithm confirmed that some sequence pairs are non-ideal in the sense that their inevitable frequency collisions are not spread evenly over all relative shifts between the sequences, but concentrated in a few of these shifts. It was also pointed out that not all cell IDs met the desired requirement of large separation between transmissions occurring on consecutive frames. Analysis of the new FH sequences, which avoid a user-selected portion of the frequency band, showed that they have many of the same properties as the existing algorithm. It was possible to find sequence pairs with low correlation and thus allow multiple cells to operate in the same radio space.
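The cross-correlation analysis described above amounts to counting, for every relative shift between two hop sequences, how many hops fall on the same channel. A minimal sketch of that computation follows; the example sequences are arbitrary and are not actual WISA hop sets.

```python
import numpy as np

def collision_profile(seq_a, seq_b):
    """Hamming cross-correlation of two frequency-hopping sequences.

    For every relative (cyclic) shift, counts how many hops land on the same
    channel -- the collisions whose spread over shifts the thesis analyzes
    for closely spaced WISA cells.
    """
    a = np.asarray(seq_a)
    b = np.asarray(seq_b)
    assert len(a) == len(b)
    return np.array([np.sum(a == np.roll(b, s)) for s in range(len(a))])

# Arbitrary example sequences (channel indices), not real WISA hop sets
seq_a = np.array([3, 17, 42, 8, 29, 55, 61, 12])
seq_b = np.array([17, 3, 55, 29, 8, 42, 12, 61])
profile = collision_profile(seq_a, seq_b)
print(profile, "worst shift:", int(profile.argmax()))
```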
160

Feedback-based Error Control Methods for H.264

Selnes, Stian January 2007 (has links)
Many network-based multimedia applications transmit real-time media over unreliable networks, i.e. data may be lost or corrupted on its route from sender to receiver. Such errors may cause a severe degradation in perceptual quality. It is important to apply techniques that improve the robustness against errors in order to ensure that the receiver is able to play back the media with the best attainable quality. Today, most error resilience (ER) schemes for video employ proactive error-resilient encoding. These schemes add redundant information to the encoded video stream in order to increase the robustness against potential errors. Because of this, most proactive schemes suffer from a significant reduction in coding efficiency. Another approach is to adjust the encoder operations based on feedback information from the decoder, e.g. to repair corrupted regions based on reports of lost data. Feedback-based ER schemes normally improve the coding efficiency compared with proactive schemes. Moreover, they adjust rapidly to time-varying network conditions. The objective of this thesis is to develop and evaluate a feedback-based ER scheme conforming to the H.264/AVC standard and applicable to real-time low-delay video applications. The scheme is referred to as FBIR. The performance of FBIR is compared with an existing proactive ER scheme, known as IPLR. Special attention is given to the applied feedback mechanism, RTP/AVPF. RTP/AVPF is a new (2006) feedback protocol. Basically, it specifies two modifications/additions to the RTCP: first, it modifies the timing algorithm to enable early feedback while not exceeding the RTCP bandwidth constraint; second, new RTCP message types are defined, which provide information useful for error control purposes. FBIR employs RTP/AVPF to provide timely feedback of lost packets from the decoder to the encoder. Upon reception of this feedback, the encoder uses a fast error tracking algorithm to locate the erroneous regions. Finally, the regions that are assumed to be visually corrupted after decoding are intra refreshed. IPLR is an ER scheme developed for use in a commercial video communication system. It applies a motion-based intra refresh routine. The comparison is carried out by online simulations with various network environments (0, 1, 3 and 5% loss rate; 50 and 200 ms latency), bit rates (64, 144 and 384 kbit/s) and video sequences. First, the video is encoded and transmitted in real time to the decoder via a network emulator, which generates the desired network characteristics. The receiver decodes the video in real time and transmits feedback information back to the encoder. The encoder adjusts its encoding process according to this feedback. The H.264/AVC reference software is modified and used as codec. Finally, objective quality measures are obtained by calculating the PSNR of the decoded videos. In addition, some visual inspection is performed. Isolated measurements of the RTP/AVPF transmission algorithm are also performed. These show that RTP/AVPF is able to provide timely feedback for error control purposes for a great number of applications and network environments. However, the experienced feedback delay may be increased by numerous factors, e.g. the network latency, the packet loss rate, the session bandwidth, and the number of receivers. This may decrease the performance of ER schemes utilizing RTP/AVPF. RTP/AVPF is fairly easy to implement since it only modifies the RTCP timing algorithm and adds new RTCP message types.
RTP/AVPF may be used in combination with other standards in order to extend the available feedback information. Hence, RTP/AVPF enables timely feedback for use in a wide range of multimedia applications. The PSNR measurements show that FBIR always obtains higher objective quality than IPLR for error-free transmissions. This does not, however, necessarily affect the perceptual quality if the bit rate is high. FBIR also achieves higher PSNR in other situations, such as for very low loss rates, low or medium bit rates, and for sequences with high or medium motion activity. Conversely, IPLR performs better for low-motion sequences encoded at high bit rates when the loss rate exceeds a certain threshold, typically about 1%. It is also shown that the performance of FBIR may be reduced if the network latency increases. Visually, the main difference between the two schemes is that FBIR recovers all corrupted regions at one instant, while IPLR performs a gradual refresh. The average time before recovery is somewhat shorter for IPLR. The differences between FBIR and IPLR are mainly caused by two factors. First, using FBIR results in less intra coding and thus better coding efficiency. Second, the FBIR scheme does not repair errors until the encoder receives the feedback; usually, this happens after IPLR has repaired most of the corrupted region. In short, FBIR provides medium error robustness and high coding efficiency, in contrast to IPLR's high robustness and low coding efficiency. While FBIR's performance may be reduced by network characteristics such as increased latency, IPLR is unaffected by these factors. For error-free transmissions, FBIR does not significantly reduce the coding gain compared with a non-robust encoding scheme, yet it provides good robustness against corruption in error-prone networks. Thus, all real-time video systems that benefit from immediate feedback should strongly consider employing FBIR or similar feedback-based ER schemes.
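As an illustration of the feedback-driven intra refresh idea behind FBIR, the sketch below keeps a per-macroblock corruption map that is updated from decoder loss reports and emptied when the affected macroblocks are intra-coded. It is a deliberately simplified sketch, not the thesis implementation: motion-based error tracking is omitted, and the class and method names are invented for illustration.

```python
class FeedbackIntraRefresh:
    """Simplified sketch of feedback-based intra refresh (not the thesis code).

    The decoder reports lost macroblocks for a past frame; the encoder keeps a
    per-macroblock "possibly corrupted" map, keeps each macroblock suspect
    until it is refreshed (motion-based spreading is omitted for brevity), and
    forces the suspect macroblocks to intra mode in the next frame it encodes.
    """

    def __init__(self, num_macroblocks: int):
        self.suspect = [False] * num_macroblocks

    def on_loss_report(self, lost_mb_indices):
        # Feedback (e.g. an RTP/AVPF NACK mapped to macroblocks) arrives late;
        # mark the affected macroblocks as possibly corrupted at the decoder.
        for i in lost_mb_indices:
            self.suspect[i] = True

    def macroblocks_to_intra_code(self):
        # Called when encoding the next frame: intra-refresh every suspect
        # macroblock, then clear the map (an intra block needs no reference).
        refresh = [i for i, s in enumerate(self.suspect) if s]
        self.suspect = [False] * len(self.suspect)
        return refresh

# Example: 99 macroblocks (QCIF); the decoder reports a lost packet covering MBs 30-41
tracker = FeedbackIntraRefresh(99)
tracker.on_loss_report(range(30, 42))
print(tracker.macroblocks_to_intra_code())
```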
