1

Effects of the Force Distribution in Dry Granular Materials

Bratberg, Ivar January 2004
This work concentrates on the force network of dry granular materials. The dynamical and static properties of the force network are studied through simulations and experiments.

We study the structural properties of two-dimensional granular packings prepared by random deposition from a source line. We consider a class of random ballistic deposition models based on single-particle relaxation rules controlled by a critical angle, and show that these local rules can be formulated as rolling friction in the framework of dynamic methods for the simulation of granular materials. We find that the packing prepared by random deposition models is generically unstable and undergoes dynamic rearrangements. As a result, the dynamic method leads systematically to a higher solid fraction than the geometrical model for the same critical angle. We characterise the structure of the packings generated by both methods in terms of solid fraction, contact connectivity and anisotropy. Our analysis provides evidence for four packing regimes as a function of solid fraction, the mechanisms of packing growth being different in each regime.

Using the Contact Dynamics method, the stick-slip response of a pushed granular column is analysed, and a power law with exponent 1.8 is found for the distribution of slips. The exponent is invariant under perturbations of the different physical parameters. Two velocity regimes were found: stick-slip and steady state. Both regimes could be observed in very simple systems, making a detailed analysis possible.

An experiment on narrow granular columns was performed to test the validity of the Janssen law under such conditions. The weight at the bottom of the cylinder and the compression and movement of the packing were measured. The dependence of the apparent mass on height is not in good agreement with the Janssen law under a one-parameter fit. A two-parameter fit yielded good results for the apparent mass during upward and downward movement of the granular column at constant velocity inside the enclosing cylinder. The necessity of two parameters has its origin in rotational frustration. The dependence of the apparent mass on the diameter of the column does not follow the Janssen law; rather, it depends strongly on details of the packing.
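The one- versus two-parameter fits discussed above can be illustrated with a short sketch. The saturating form below is the standard Janssen expression for the apparent mass at the bottom of a column as a function of the mass poured in; the two-parameter variant with an additive offset, the parameter names (`m_sat`, `m_off`), and all numbers are illustrative assumptions, not the thesis's actual parameterisation or data.

```python
# Hedged sketch: one- and two-parameter Janssen-type fits of apparent mass
# at the column base versus filled mass, on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def janssen_1p(m_fill, m_sat):
    """One-parameter Janssen law: apparent mass saturates at m_sat."""
    return m_sat * (1.0 - np.exp(-m_fill / m_sat))

def janssen_2p(m_fill, m_sat, m_off):
    """Two-parameter variant with an additive offset (illustrative assumption)."""
    return m_off + m_sat * (1.0 - np.exp(-m_fill / m_sat))

# Synthetic "measurements" with mild noise.
m_fill = np.linspace(0.0, 50.0, 40)   # grams of grains poured in
m_app = janssen_2p(m_fill, 12.0, 1.5)
m_app += np.random.default_rng(0).normal(0.0, 0.2, m_fill.size)

p1, _ = curve_fit(janssen_1p, m_fill, m_app, p0=[10.0])
p2, _ = curve_fit(janssen_2p, m_fill, m_app, p0=[10.0, 1.0])
print("1-parameter fit: m_sat =", p1)
print("2-parameter fit: m_sat, m_off =", p2)
```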
2

Improved Receivers for Digital High Frequency Communications : Iterative Channel Estimation, Equalization, and Decoding (Adaptive Turbo Equalization)

Otnes, Roald January 2002
We address the problem of improving the throughput and the availability of digital communications in the High Frequency (HF, 3-30 MHz) band. In standardized military waveforms, the data is protected by an error-correcting code (ECC), and the code bits are shuffled by an interleaver and mapped onto a signal constellation for modulation onto a single carrier. Training sequences are multiplexed into the stream of transmitted symbols to aid the receiver in tracking the channel variations. The channel imposes severe time-varying intersymbol interference (ISI) as well as additive noise.

Conventional receivers for such a system would first perform adaptive equalization (to mitigate the ISI) and symbol demapping, deinterleave the received code bits, and finally perform decoding, where the redundancy of the ECC is used to make high-quality decisions on the transmitted data bits even when bit errors have been introduced by the channel. Such a receiver is suboptimal because the equalizer does not make use of the redundancy introduced by the ECC, and it is outperformed by an iterative scheme called turbo equalization. In turbo equalization, also known as iterative equalization and decoding, soft information on the code bits is fed back from the decoder to the equalizer in an iterative fashion, and by performing the equalization and decoding tasks several times the bit error rates become significantly smaller than for a conventional "single-pass" receiver.

Since we are dealing with an unknown time-varying channel, we must also perform channel estimation. We include channel estimation in the iterative loop of the turbo equalizer, using soft information fed back from the decoder as "training sequences" between the ordinary transmitted training sequences. The receiver then performs iterative channel estimation, equalization, and decoding, which can also be called adaptive turbo equalization.

We have proposed a receiver using adaptive turbo equalization, and performed simulations using the MIL-STD-188-110 waveform at 2400 bps, transmitted over an ITU-R poor channel (a channel commonly used to test HF modems). We find that the proposed receiver outperforms a conventional receiver by 2-3 dB in terms of the signal-to-noise ratio required to achieve a given bit error rate.

In this dissertation, we give an introduction to the fields of HF communications and standardized HF waveforms, channel modelling, and turbo equalization. We present an analysis of measured channel data to motivate our research in turbo equalization. We then present our research contributions to the field of turbo equalization: a low-complexity soft-in soft-out equalizer for time-varying channels, a comparative study of channel estimation algorithms using soft information as the input signal, and an investigation of adaptive turbo equalization using a technique known as EXIT charts. Finally, we present our main practical result, which is our proposed receiver and the corresponding simulation results.
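The iterative structure described above can be sketched in miniature. The toy below demonstrates the turbo principle only: a known two-tap ISI channel, a rate-1/2 repetition code standing in for the real ECC, exact MAP equalization by enumeration on a tiny block, and extrinsic LLR exchange between equalizer and decoder. Channel estimation, the MIL-STD-188-110 waveform, and the dissertation's low-complexity equalizers are all omitted; every name and number here is an assumption.

```python
# Minimal, hedged turbo-equalization toy: extrinsic LLR exchange between an
# exact MAP equalizer (brute-force over 2^8 sequences) and a repetition decoder.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
h = np.array([1.0, 0.5])      # known 2-tap ISI channel (illustrative)
sigma = 0.6                   # AWGN standard deviation
N_INFO = 4                    # 4 info bits -> 8 code bits -> 2^8 hypotheses

# Transmitter: repetition code, random interleaver, BPSK (bit 0 -> +1).
info = rng.integers(0, 2, N_INFO)
code = np.repeat(info, 2)
perm = rng.permutation(code.size)
x = 1.0 - 2.0 * code[perm]
y = np.convolve(x, h)[: x.size] + sigma * rng.normal(size=x.size)

B = np.array(list(product((0, 1), repeat=x.size)))      # all candidate bit vectors
S = 1.0 - 2.0 * B                                        # their BPSK symbols
Y = np.array([np.convolve(s, h)[: x.size] for s in S])   # noiseless channel outputs

def map_equalizer(y, prior_llr):
    """Exact symbol-wise MAP over all sequences; returns extrinsic LLRs."""
    metric = -np.sum((y - Y) ** 2, axis=1) / (2 * sigma**2)
    metric += S @ (prior_llr / 2)              # prior: +L/2 if bit 0, -L/2 if bit 1
    w = np.exp(metric - metric.max())          # numerically stabilised weights
    p0 = (w[:, None] * (B == 0)).sum(axis=0)
    p1 = (w[:, None] * (B == 1)).sum(axis=0)
    return np.log(p0 / p1) - prior_llr         # subtract own prior -> extrinsic

def rep_decoder(llr):
    """Repetition-code extrinsic: each copy learns only from its partner."""
    ext = np.empty_like(llr)
    ext[0::2], ext[1::2] = llr[1::2], llr[0::2]
    return ext

prior = np.zeros(x.size)
for _ in range(4):                             # turbo iterations
    eq_ext = map_equalizer(y, prior)
    dec_in = np.empty_like(eq_ext)
    dec_in[perm] = eq_ext                      # deinterleave
    prior = rep_decoder(dec_in)[perm]          # decode extrinsic, re-interleave

info_llr = dec_in[0::2] + dec_in[1::2]         # combine both copies per info bit
print("info bits:", info, " decisions:", (info_llr < 0).astype(int))
```

Over the iterations the decoder feedback sharpens the equalizer's priors; this soft-information exchange is the mechanism behind the 2-3 dB gain reported above for the full system.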
3

Segmentation and labelling of speech

Kvale, Knut January 1993
During the last decades, significant research efforts have been aimed at developing speech technology products such as speech input and output systems. In order to train and evaluate these systems, huge speech databases have been compiled in laboratories all over the world. However, neither the recording protocols nor the annotation conventions used have been standardised, making assessments of speech technology products across laboratories and languages difficult. The aim of this thesis work is to contribute towards a standardisation of segmentation and labelling of multi-lingual speech corpora.

Segmentation is here defined as the process of dividing the speech pressure waveform into directly succeeding discrete parts. These segments are labelled with phoneme symbols. Continuous speech from five different languages (English, Danish, Swedish, Italian, and Norwegian) has been studied with respect to segmentation and labelling.

Due to coarticulation effects, exact segmentation of speech as defined above is theoretically impossible, but segmentation and labelling provide a link between the speech waveform and the phonological labels which is nevertheless essential both for speech research and for the development of speech technology. Thus, this thesis takes a pragmatic approach to the segmentation and labelling of speech and suggests methods to make the annotation process accurate and reliable enough for practical use.
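The definition above, segments that directly succeed one another, amounts to requiring that the labels tile the waveform without gaps or overlaps, which can be stated as a simple check. The (start, end, label) layout and the phoneme symbols below are illustrative assumptions, not the annotation convention proposed in the thesis.

```python
# Hedged sketch: phoneme segments as (start_sample, end_sample, label) triples,
# plus a check that they partition the waveform contiguously.
from typing import List, Tuple

Segment = Tuple[int, int, str]  # start (inclusive), end (exclusive), phoneme label

def is_valid_segmentation(segments: List[Segment], n_samples: int) -> bool:
    """True iff the segments tile [0, n_samples) with no gaps or overlaps."""
    pos = 0
    for start, end, _label in segments:
        if start != pos or end <= start:
            return False
        pos = end
    return pos == n_samples

# Illustrative annotation of a 16 kHz recording of the word "speech".
annotation: List[Segment] = [
    (0, 1800, "s"), (1800, 2600, "p"), (2600, 5400, "i:"), (5400, 7200, "tS"),
]
print(is_valid_segmentation(annotation, 7200))  # True
```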
4

Optimal Bit and Power Constrained Filter Banks

Hjørungnes, Are January 2000
In this dissertation, two filter bank optimization problems are studied. The first problem is the optimization of filter banks used in a subband coder under a bit constraint. In the second problem, a multiple input multiple output communication system is optimized under a power constraint. Three different filter length cases are considered: unconstrained length filter banks, transforms, and finite impulse response filter banks with arbitrary given filter lengths.

In source coding and multiple input multiple output communication systems, transforms and filter banks are used to decompose the source in order to generate samples that are partly decorrelated. These samples are more suitable for source coding or transmission over a channel than the original source samples. Most transforms and filter banks studied in the literature have the perfect reconstruction property. In this dissertation, the perfect reconstruction condition is relaxed, so that the transforms and filter banks are allowed to belong to larger sets, which contain perfect reconstruction transforms and filter banks as subsets.

Jointly optimal analysis and synthesis filter banks and transforms are proposed under the bit and power constraints for all three filter length cases. For a given number of bits used in the quantizers, or for a given channel with a maximum allowable input power, the analysis and synthesis transforms and filter banks are jointly optimized such that the mean square error between the original and decoded signal is minimized. Analytical expressions are obtained for unconstrained length filter banks and transforms, and an iterative numerical algorithm is proposed in the finite impulse response filter bank case.

The channel in the communication problem is modelled as a known multiple input multiple output transfer matrix with signal independent additive vector noise having known second order statistics. A pre- and postprocessor containing modulation is introduced in the unconstrained length filter bank system with a power constraint. It is shown that the performance of this system is the same as the performance of the power constrained transform coder system when the dimensions of the latter system approach infinity.

In the source coding problem, the results are obtained with different quantization models. In the simplest model, the subband quantizers are modelled as additive white signal independent noise sources. The proposed filter banks and transforms are evaluated under this model, and it is shown that the proposed transform has better performance than the Karhunen-Loève transform. Also, the proposed transform coder has the same performance as a transform coder using a reduced rank Karhunen-Loève analysis transform with jointly optimal bit allocation and a Wiener synthesis transform. The proposed finite impulse response filter banks have at least as good theoretical rate distortion performance as the perfect reconstruction filter banks and the finite impulse response Wiener filter banks used in the comparison.

A practical coding system is introduced where the coding of the subband signals is performed by uniform threshold quantizers using the centroids as representation levels. It is shown that there is a mismatch between the theoretical and practical results. Three methods for removing this mismatch are introduced. In the first two methods, the filter banks themselves are unchanged, but the coding method of the subband signals is changed. In the first of these two methods, quantizers are derived such that the additive coding noise and the subband signals are uncorrelated. Subtractive dithering is the second method used for coding of the subband signals. In the third method, a signal dependent colored noise model is introduced, and this model is used to redesign the filter banks. In all three methods, good correspondence is achieved between the theoretical and practical results, and the proposed methods achieve comparable or better practical rate distortion performance than systems using perfect reconstruction filter banks and finite impulse response Wiener synthesis filter banks.

Finally, conditions for when finite impulse response filter banks are optimal are derived.
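As a hedged sketch of the baseline in the simplest quantization model above, the following computes a Karhunen-Loève analysis transform for a toy source and evaluates the additive-white-noise distortion model with the classical bit allocation. The AR(1) source, the rate, and all names are illustrative assumptions, not the systems designed in the thesis.

```python
# Hedged sketch: KLT of a toy AR(1) source, with subband quantizers modelled
# as additive white signal-independent noise (the simplest model above).
import numpy as np

n, rho = 8, 0.9
R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1) covariance
eigvals, eigvecs = np.linalg.eigh(R)
T = eigvecs.T                     # KLT analysis transform: rows are eigenvectors

# High-rate model: quantizing coefficient i with b_i bits gives noise variance
# proportional to var_i * 2^(-2 b_i). Classical equal-distortion allocation
# (allocations may be negative in this idealised, unconstrained form).
var = eigvals[::-1]               # coefficient variances, largest first
b_total = 2.0 * n                 # budget: 2 bits per sample on average (assumed)
geo_mean = np.exp(np.mean(np.log(var)))
b = b_total / n + 0.5 * np.log2(var / geo_mean)
dist = np.mean(var * 2.0 ** (-2.0 * b))   # per-sample MSE under the model
print("bit allocation:", np.round(b, 2))
print("model distortion:", dist)
```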
5

Practical Thermal and Electrical Parameter Extraction Methods for Modelling HBTs and their Applications in Power Amplifiers

Olavsbråten, Morten January 2003
A new practical technique for estimating the junction temperature and the thermal resistance of an HBT was developed. The technique estimates an interval for the junction temperature. The main assumption in the new technique is that the junction temperature can be calculated from three separate phenomena: the thermal conduction of the substrate, the thermal conduction of the metal connecting the emitter to the via holes, and the effects of the via holes on the substrate temperature. The main features of the new technique are: the junction temperature and the thermal resistance are calculated from a few physical properties and the layout of the transistors, the only required software tool is MATLAB, the calculation time is very short compared to a full 3D thermal simulation, and the technique is easy for the circuit designer to use. The new technique shows good accuracy when applied to several InGaP/GaAs HBTs from Caswell Technology and compared to the results from other methods. All the results fall well within the estimated junction temperature intervals from the new technique.

A practical parameter extraction method for the VBIC model was developed. As far as the author knows, this is the first published practical parameter extraction technique for the VBIC model used on an InGaP/GaAs HBT. The main features of the extraction method are: only a few common measurements are needed, it is easy and practical for the circuit designer to use, and it achieves good accuracy with only a few iterations. No expensive and specialized parameter extraction software is required; the only software needed is a circuit simulator. The method includes the extraction of the bias dependent forward transit time. The extraction method was evaluated on a single-finger, 1x40, InGaP/GaAs HBT from Caswell Technology. Only four iterations were required to fit the measurements very well. There is less than 1 % error in both the Ic-Vce and Vbe-Vce plots. The maximum magnitude and phase errors in the whole frequency range up to 40 GHz are less than 1.5 dB and 15 degrees. The method was also evaluated on a SiGe HBT, where models for a single-finger and an 8-finger transistor were extracted. All the dc characteristics of the modeled transistors have less than 3.5 % error. Some amplitude and phase errors are observed in the s-parameters. The errors are caused by uncertainties in the calibration due to a worn calibration substrate, high temperature drift during the measurements, and uncertainties in the physical dimensions/properties caused by lack of information from the foundry. Overall, the extracted models fit the measurements quite well.

A very linear class A power amplifier was designed using the InGaP/GaAs HBTs from Caswell Technology. The junction temperature estimation technique developed here was used to make a very good thermal layout of the power amplifier. The estimated average junction temperature is 98.6°C above the ambient temperature of 45°C, with a total dissipated power of 6.4 W. The maximum junction temperature difference between the transistor fingers is less than 11°C. The PA was constructed with a 'bus bar' power combiner at both input and output, and optimized for maximum gain with a 10 % bandwidth. The PA had a maximum output power of 34.8 dBm, a 1 dB compression point of 34.5 dBm, a third-order intercept point of 49.9 dBm, and a PAE of 27.2 % at 33 dBm output power.
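The headline thermal figures above imply an effective lumped thermal resistance, which makes a quick consistency check possible. The one-resistor model below is an illustration only; the thesis's technique instead bounds the junction temperature with an interval computed from the layout and material properties.

```python
# Hedged check of the thermal figures quoted above: an average junction rise
# of 98.6 C at 6.4 W dissipation implies an effective R_th = dT / P for a
# simple lumped model (not the thesis's estimation technique).
P_diss = 6.4            # total dissipated power, W
dT = 98.6               # average junction rise above ambient, C
T_amb = 45.0            # ambient temperature, C

R_th = dT / P_diss                  # effective thermal resistance, ~15.4 C/W
T_junction = T_amb + R_th * P_diss  # = T_amb + dT
print(f"R_th ~= {R_th:.1f} C/W, T_j ~= {T_junction:.1f} C")
```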
6

Three-Dimensional Radio Channel Modeling for Mobile Communications Systems

Pettersen, Magne January 2001
The work described in this report is within the area of three-dimensional (3D) radio channel modeling for mobile communications. The focus was on rural areas, because radio coverage of rural areas is more costly at higher frequencies (comparing UMTS to GSM), and seasonal and environmental variations are strongest there. The model used was a 3D radar model, comprising a 2D vertical Tx-Rx-plane component and 3D components that account for off-axis scattering; the latter are estimated using bistatic radar techniques. The model provides an accurate estimate of the path loss (signal level) and can also estimate time dispersion and angular dispersion, taking off-axis contributions into account. Radio frequencies around 2 GHz were selected, as these are the most important frequency bands for third-generation mobile systems, even though the envisaged approach also supports radio planning for GSM 900 and WLAN systems.

A novel approach to the modeling of scattering from random rough surfaces for 3D channel modeling was developed. This amplitude/phase model is simple and accurate compared to conventional models. It makes no inherent assumption about the degree of roughness, making it suited to model all surfaces. The model outperforms the conventional models (plane surface, SPM, Kirchhoff, and Oren) with respect to accuracy by 1.5 to 10 dB, depending on the degree of roughness.

An experimental methodology to characterise random rough surfaces was developed. The work characterised natural surfaces such as asphalt, grass, agriculture, and forest, each having a different degree of roughness. Variations due to weather and seasonal changes were taken into account. Typical estimated surface height variations were 10 mm for asphalt, 25 mm for grass, 100 mm for ploughed field, and 500 mm for forest. Snow reduced the apparent roughness of ploughed field by 50 %; water on grass increased the reflection coefficient by 50 %.

An analysis of the implications of the results for 3D channel modeling was performed using a demonstration model. The analysis included a comparison between 2D and 3D model predictions for different area types and land use classes. The sensitivity of the predictions to seasonal and weather variations and to model parameter variations was also inspected. A 3D model is necessary when the 2D component is attenuated more than typically 15 dB relative to free space, depending on area and land usage. In the network planning example of Lillehammer (N), an attenuation of at least 15 dB existed in 40 % of all locations. Weather and seasonal variations may change the mean predicted value by up to 4-5 dB.
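The off-axis components above are estimated with bistatic radar techniques. As a hedged illustration of the underlying link budget, the sketch below evaluates the standard bistatic radar equation at 2 GHz; the gains, distances, and scattering cross-section are illustrative numbers, not values from the thesis.

```python
# Hedged sketch: the standard bistatic radar equation, used only to show how
# an off-axis scattered contribution can be budgeted at 2 GHz.
import math

c = 3.0e8
f = 2.0e9                      # carrier frequency, Hz
lam = c / f                    # wavelength, m

def bistatic_rx_power(p_t, g_t, g_r, sigma_b, r1, r2):
    """Received power for a Tx -> scatterer -> Rx path (bistatic radar eq.)."""
    return p_t * g_t * g_r * lam**2 * sigma_b / ((4 * math.pi) ** 3 * r1**2 * r2**2)

# Illustrative numbers: 10 W Tx, unity antenna gains, 10 m^2 equivalent
# scatterer, 1 km from Tx to scatterer and 500 m on to the Rx.
p_r = bistatic_rx_power(10.0, 1.0, 1.0, 10.0, 1000.0, 500.0)
print(f"scattered-path received power: {10 * math.log10(p_r / 1e-3):.1f} dBm")
```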
