11

Three-Dimensional Radio Channel Modeling for Mobile Communications Systems

Pettersen, Magne January 2001
The work described in this report is within the area of three-dimensional (3D) radio channel modeling for mobile communications. The focus was on rural areas, because radio coverage of rural areas becomes more costly at higher frequencies, as when moving from GSM to UMTS. In addition, seasonal and environmental variations are strongest there. The model used was a 3D radar model, comprising a 2D component in the vertical Tx-Rx plane and 3D components that account for off-axis scattering. The latter components are estimated using bistatic radar techniques. The model provides an accurate estimate of the path loss (signal level) and can also estimate time dispersion and angular dispersion, taking off-axis contributions into account. Radio frequencies around 2 GHz were selected, as these are the most important frequency bands for 3rd-generation mobile systems, although the envisaged approach also supports radio planning for GSM 900 and WLAN systems.

A novel approach to the modeling of scattering from random rough surfaces for 3D channel modeling was developed. This amplitude/phase model is simple and accurate compared to conventional models. It makes no inherent assumption about the degree of roughness, which makes it suited to modeling all surfaces. The model outperforms the conventional plane-surface, SPM, Kirchhoff and Oren models in accuracy by 1.5 to 10 dB, depending on the degree of roughness.

An experimental methodology for characterising random rough surfaces was developed. The work characterised natural surfaces such as asphalt, grass, agricultural land and forest, each with a different degree of roughness. Variations due to weather and seasonal changes were taken into account. Typical estimated surface height variations were 10 mm for asphalt, 25 mm for grass, 100 mm for a ploughed field and 500 mm for forest. Snow reduced the apparent roughness of a ploughed field by 50 %, while water on grass increased the reflection coefficient by 50 %.

An analysis of the implications of these results for 3D channel modeling was performed using a demonstration model. The analysis included a comparison of 2D and 3D model predictions for different area types and land-use classes, as well as the sensitivity of the predictions to seasonal and weather variations and to model parameter variations. A 3D model is necessary when the 2D component is attenuated by more than typically 15 dB relative to free space, depending on area and land use. In the network planning example of Lillehammer (Norway), this attenuation of at least 15 dB occurred in 40 % of all locations. Weather and seasonal variations may change the mean predicted value by up to 4-5 dB.
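The thesis's amplitude/phase scattering model is not reproduced here, but the classical Kirchhoff/Rayleigh attenuation of the specular reflection coefficient gives a feel for how the reported surface height deviations translate into scattering loss at 2 GHz. The sketch below assumes this textbook formula and an arbitrary 85° incidence angle; it is illustrative only and is not the model developed in the thesis.

```python
"""Illustrative sketch (not the thesis's amplitude/phase model): classical
Kirchhoff/Rayleigh attenuation of the specular reflection from a random rough
surface, rho_s = exp(-0.5 * (4*pi*sigma_h*cos(theta_i)/lambda)**2), evaluated
at 2 GHz for the surface roughness values reported in the abstract."""
import math

C = 3e8        # speed of light, m/s
F = 2e9        # carrier frequency, Hz (2 GHz band considered in the thesis)
LAM = C / F    # wavelength, m

def specular_attenuation_db(sigma_h_m: float, theta_i_deg: float) -> float:
    """Loss of the coherent (specular) component due to roughness, in dB.
    sigma_h_m: RMS surface height deviation in metres;
    theta_i_deg: incidence angle measured from the surface normal."""
    theta = math.radians(theta_i_deg)
    rho = math.exp(-0.5 * (4 * math.pi * sigma_h_m * math.cos(theta) / LAM) ** 2)
    return -20 * math.log10(rho)

# RMS height deviations quoted in the abstract (in metres)
surfaces = {"asphalt": 0.010, "grass": 0.025, "ploughed field": 0.100, "forest": 0.500}
for name, sigma in surfaces.items():
    print(f"{name:>14s}: {specular_attenuation_db(sigma, theta_i_deg=85):6.1f} dB "
          f"specular loss at 85 deg incidence")
```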
12

Practical Thermal and Electrical Parameter Extraction Methods for Modelling HBT's and their Applications in Power Amplifiers

Olavsbråten, Morten January 2003
A new practical technique for estimating the junction temperature and the thermal resistance of an HBT was developed. The technique estimates an interval for the junction temperature. The main assumption is that the junction temperature can be calculated from three separate contributions: the thermal conduction of the substrate, the thermal conduction of the metal connecting the emitter to the via holes, and the effect of the via holes on the substrate temperature. The main features of the new technique are that the junction temperature and the thermal resistance are calculated from a few physical properties and the layout of the transistors, that the only required software tool is MATLAB, that the calculation time is very short compared to a full 3D thermal simulation, and that the technique is easy for the circuit designer to use. The new technique shows good accuracy when applied to several InGaP/GaAs HBTs from Caswell Technology and compared with results from other methods. All the results fall well within the junction temperature intervals estimated by the new technique.

A practical parameter extraction method for the VBIC model was developed. As far as the author knows, this is the first published practical parameter extraction technique for the VBIC model applied to an InGaP/GaAs HBT. The main features of the extraction method are that only a few common measurements are needed, that it is easy and practical for the circuit designer to use, and that it achieves good accuracy with only a few iterations. No expensive, specialized parameter extraction software is required; the only software needed is a circuit simulator. The method includes the extraction of the bias-dependent forward transit time. The extraction method was evaluated on a single-finger 1x40 InGaP/GaAs HBT from Caswell Technology. Only four iterations were required to fit the measurements very well: there is less than 1 % error in both the Ic-Vce and Vbe-Vce plots, and the maximum magnitude and phase errors over the whole frequency range up to 40 GHz are less than 1.5 dB and 15 degrees, respectively. The method was also evaluated on a SiGe HBT, where models for a single-finger and an 8-finger transistor were extracted. All the DC characteristics of the modeled transistors have less than 3.5 % error. Some amplitude and phase errors are observed in the s-parameters; these are caused by uncertainties in the calibration due to a worn calibration substrate, high temperature drift during the measurements, and uncertainties in the physical dimensions and properties caused by lack of information from the foundry. Overall, the extracted models fit the measurements quite well.

A very linear class A power amplifier was designed using the InGaP/GaAs HBTs from Caswell Technology. The junction temperature estimation technique developed here was used to produce a very good thermal layout of the power amplifier. The estimated average junction temperature is 98.6°C above the ambient temperature of 45°C, with a total dissipated power of 6.4 W. The maximum junction temperature difference between the transistor fingers is less than 11°C. The PA was constructed with a 'bus bar' power combiner at both input and output and optimized for maximum gain over a 10 % bandwidth. The PA had a maximum output power of 34.8 dBm, a 1 dB compression point of 34.5 dBm, a third-order intercept point of 49.9 dBm, and a PAE of 27.2 % at 33 dBm output power.
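As a rough illustration of the bookkeeping behind such junction-temperature estimates (and not the technique developed in the thesis, which also models the emitter metal and the via holes), the sketch below combines a 1-D slab conduction resistance with the basic relation Tj = Tamb + Rth·Pdiss. The substrate geometry in the example is hypothetical, while the 98.6°C rise at 6.4 W is taken from the abstract.

```python
"""Minimal sketch of junction-temperature bookkeeping (not the thesis's
estimation technique): a 1-D slab conduction resistance R_th = t/(k*A) and
the relation T_j = T_amb + R_th_total * P_diss."""

def slab_resistance(thickness_m: float, conductivity_w_mk: float, area_m2: float) -> float:
    """Thermal resistance (K/W) of 1-D conduction through a slab."""
    return thickness_m / (conductivity_w_mk * area_m2)

def junction_temperature(t_amb_c: float, r_th_k_per_w: float, p_diss_w: float) -> float:
    """Junction temperature (deg C) from ambient, thermal resistance and dissipated power."""
    return t_amb_c + r_th_k_per_w * p_diss_w

# Hypothetical GaAs substrate geometry, only to show how layout enters R_th:
k_gaas = 46.0  # W/(m K), approximate thermal conductivity of GaAs
r_sub = slab_resistance(thickness_m=100e-6, conductivity_w_mk=k_gaas, area_m2=0.3e-6)
print(f"slab estimate: {r_sub:.1f} K/W")

# Effective average resistance implied by the PA figures quoted in the abstract:
# a 98.6 degC rise at 6.4 W dissipated power.
print(f"implied PA average R_th: {98.6 / 6.4:.1f} K/W")
print(f"junction temperature at 45 degC ambient: "
      f"{junction_temperature(45.0, 98.6 / 6.4, 6.4):.1f} degC")
```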
13

Feature Extraction for Automatic Speech Recognition in Noisy Acoustic Environments / Parameteruttrekning for automatisk talegjenkjenning i støyende omgivelser

Gajic, Bojana January 2002
This thesis presents a study of alternative speech feature extraction methods aimed at increasing the robustness of automatic speech recognition (ASR) against additive background noise.

Spectral peak positions of speech signals remain practically unchanged in the presence of additive background noise. It was therefore expected that emphasizing spectral peak positions in speech feature extraction would improve the noise robustness of ASR systems. If frequency subbands are properly chosen, dominant subband frequencies can serve as reasonable estimates of spectral peak positions. Thus, different methods for incorporating dominant subband frequencies into speech feature vectors were investigated in this study.

To begin with, two earlier proposed feature extraction methods that utilize dominant subband frequency information were examined. The first uses zero-crossing statistics of the subband signals to estimate dominant subband frequencies, while the second uses subband spectral centroids. The methods were compared with the standard MFCC feature extraction method on two different recognition tasks in various background conditions. The first method was shown to improve ASR performance on both recognition tasks at sufficiently high noise levels; the improvement was, however, smaller on the more complex recognition task. The second method, on the other hand, led to some reduction in ASR performance in all testing conditions.

Next, a new method for incorporating subband spectral centroids into speech feature vectors was proposed and shown to be considerably more robust than the standard MFCC method on both ASR tasks. The main difference between the proposed method and the zero-crossing-based (ZCPA) method lies in the way they utilize dominant subband frequency information. The performance improvement due to the use of dominant subband frequency information was considerably larger for the proposed method than for the ZCPA method, especially on the more complex recognition task. Finally, the computational complexity of the proposed method is two orders of magnitude lower than that of the zero-crossing-based method, and of the same order of magnitude as that of the standard MFCC method.
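The abstract does not specify the exact feature extraction pipeline, but the subband spectral centroid it builds on has a standard definition: the power-weighted mean frequency within each subband. The sketch below computes that quantity per frame; the frame length, window, sampling rate and band edges are arbitrary choices for illustration, not the settings used in the thesis.

```python
"""Sketch of the dominant-frequency estimate that subband spectral centroids
provide (the thesis's full feature extraction differs): per-frame power
spectrum, rectangular subbands, and the power-weighted mean frequency in each."""
import numpy as np

def subband_spectral_centroids(frame: np.ndarray, fs: float, band_edges_hz) -> np.ndarray:
    """Return one centroid (Hz) per subband for a single speech frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    centroids = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band = (freqs >= lo) & (freqs < hi)
        power = spectrum[band]
        # Power-weighted mean frequency; fall back to the band centre if the band is empty.
        centroids.append(np.sum(freqs[band] * power) / np.sum(power)
                         if power.sum() > 0 else 0.5 * (lo + hi))
    return np.array(centroids)

# Example: a 25 ms frame containing 500 Hz and 2.2 kHz tones, sampled at 8 kHz.
fs = 8000.0
t = np.arange(int(0.025 * fs)) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)
print(subband_spectral_centroids(frame, fs, band_edges_hz=[0, 1000, 2000, 3000, 4000]))
```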
15

Joint Source-channel Coding : Development of Methods and Utilization in Image Communications

Coward, Helge January 2001
In a traditional communication system, the coding process is divided into source coding and channel coding. Source coding is the process of compressing the source signal, and channel coding is the process of error protection. It can be shown that, with no delay or complexity constraints and with exact knowledge of the source and channel properties, optimal performance can be obtained with separate source and channel coding. However, joint source-channel coding can lead to performance gains under complexity or delay constraints and offer robustness against unknown system parameters.

Multiple description coding is a scheme for generating two (or more) descriptions of a source, where decoding is possible from either description, but decoding at higher quality is possible if both descriptions are available. This scheme has been proposed as a means of joint source-channel coding. In this dissertation, multiple description coding is used to protect against the loss of data in an error-correcting code caused by a number of channel errors exceeding the correcting ability of the channel code. This is tried on three channel models: a packet erasure channel, a binary symmetric channel, and a block fading channel, and the results obtained with multiple description coding are compared against traditional single description coding. The results show that if a long-term average mean squared error distortion measure is used, multiple description coding is not as good as single description coding, except when the delay or block error rate of the channel code is heavily constrained.

A direct source-channel mapping is a mapping from amplitude-continuous source symbols to amplitude-continuous channel symbols, often involving a dimension change. A hybrid scalar quantizer-linear coder (HSQLC) is a direct source-channel mapping in which the memoryless source signal is quantized using a scalar quantizer. The quantized value is transmitted on an analog channel using one symbol that can take as many levels as the quantizer, and the quantization error is transmitted on the same channel by means of a simple linear coder. There is thus a bandwidth expansion: two channel symbols are produced per source symbol. The channel is assumed to have additive white Gaussian noise and a power constraint. The quantizer levels and the distribution of power between the two symbols are optimized for different source distributions. A uniform quantizer with an appropriate step size gives performance close to that of the optimized quantizer for Gaussian, Laplacian, and uniform memoryless sources. The coder performs well compared to other joint source-channel coders, and it is relatively robust against variations in the channel noise level.

A previous image coder using direct source-channel mappings is improved. This coder is a subband coder in which a classification following the decorrelating filter bank assigns mappings of different rates to different subband samples according to their importance. Improvements are made to practically all parts of the coder, but the most important one is that the mappings are changed; in particular, the bandwidth-expanding HSQLC is introduced. The coder shows large improvements compared to the previous version, especially at channel qualities near the design quality. For poor channels or high rates, the HSQLC provides a large portion of the improvement. The coder is compared against a combination of a JPEG 2000 coder and a good channel code, and its performance is competitive with the reference, while the robustness against an unknown channel quality is greatly improved. This kind of robustness is very important in broadcasting and mobile communications.

/ In traditional communication systems, coding can be divided into source coding (compression) and channel coding (error protection). These operations can be considered jointly, and combined source and channel coding can give improvements under limited complexity or delay and increase robustness against unknown system parameters. Two methods are considered in the dissertation. In the first, source and channel coding are still partly separate, but the source code is made robust against decoding errors in the channel code. This is done by multiple description coding, in which the source signal is represented by two descriptions. Decoding is possible from each description in isolation, but higher quality can be achieved if both descriptions are available. A comparison with a traditional system shows that, in terms of mean squared error, multiple description coding is usually less good than a traditional system. Direct source-to-channel mappings are mappings from amplitude-continuous source symbols directly to amplitude-continuous channel symbols. Such a method is introduced: the source signal, which is assumed to be memoryless, is scalar quantized and transmitted with one symbol on an analog channel, while the quantization error is transmitted in analog form on the same channel. The system parameters are optimized for different sources and channel qualities. This coder performs well compared with other combined source and channel coders, and it is relatively robust against variations in the noise level on the channel. Direct source-to-channel mappings are applied in a subband coder for still images. This coder, which is based on earlier work, is compared with a combination of a JPEG 2000 coder and a good channel code; the performance is about as good as the reference, while the robustness against unknown channel quality is greatly increased. This kind of robustness is very important in broadcasting and mobile communications.
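The HSQLC described above can be sketched directly from the abstract: each memoryless source sample yields one channel symbol carrying the quantized value and one carrying the linearly coded quantization error, both sent over an AWGN channel. The step size, gains and noise level below are illustrative assumptions, not the optimized parameters from the dissertation.

```python
"""Minimal sketch of the hybrid scalar quantizer-linear coder (HSQLC) idea:
two channel symbols per source sample, one for the quantized value and one
for the scaled quantization error, over an AWGN channel. Parameters are
illustrative, not the thesis's optimized design."""
import numpy as np

rng = np.random.default_rng(0)

def hsqlc_transmit(x, step, gain_q, gain_e):
    """Map each source sample to two channel symbols (quantized value, scaled error)."""
    q = step * np.round(x / step)   # uniform scalar quantizer
    e = x - q                       # quantization error
    return gain_q * q, gain_e * e

def hsqlc_receive(y1, y2, gain_q, gain_e, step):
    """Decode: re-quantize the first symbol, then add the linearly coded error."""
    q_hat = step * np.round(y1 / (gain_q * step))
    return q_hat + y2 / gain_e

# Gaussian memoryless source over an AWGN channel (illustrative parameters).
x = rng.standard_normal(100_000)
step, gain_q, gain_e, noise_std = 0.5, 1.0, 4.0, 0.05
y1, y2 = hsqlc_transmit(x, step, gain_q, gain_e)
y1 = y1 + noise_std * rng.standard_normal(x.shape)
y2 = y2 + noise_std * rng.standard_normal(x.shape)
x_hat = hsqlc_receive(y1, y2, gain_q, gain_e, step)
print(f"SDR: {10 * np.log10(np.var(x) / np.mean((x - x_hat) ** 2)):.1f} dB")
```

As long as the channel noise stays well below half a quantizer step, the first symbol is decoded without error and the reconstruction error is set by the noise on the linearly coded error symbol rather than by the quantizer, which is the point of the hybrid construction.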
