11

Towards Plug-and-Play Services : Design and Validation Using Roles

Floch, Jacqueline January 2003 (has links)
Today, telecommunication service users expect to access a similar set of services independently of which network they happen to use; they expect services to adapt to new surroundings and contexts as they move around; and they expect access to new and useful services as soon as these become available. Building services that operate satisfactorily under such requirements poses new challenges and calls for new solutions and new engineering methods for rapid service development and deployment.

The PaP project at NTNU was initiated to define a framework for service development and execution that supports the dynamic composition of services using Plug-and-Play techniques. By dynamic composition, we mean that services and service components can be designed separately and then composed at run-time. Within the PaP project, this doctoral work has addressed two issues: the design and the validation of Plug-and-Play services.

Service design is complex. In a PaP context, this complexity increases further because services are designed to adapt dynamically to changing contexts. A design approach based on service roles is proposed, and role composition is proposed as a means to achieve adaptability.

We model service role behaviours and their composition using state machines that interact asynchronously. Describing system behaviours in terms of state machines has proven to be of great value and is widely adopted in teleservice engineering. We favour the modelling language SDL because its formal semantics enables an unambiguous interpretation of the system specification. However, our design and validation results are not bound to SDL; they may be applied to systems specified using other modelling languages that support state machines, such as UML.

In our work, we investigate how SDL-2000 can be used to model composition. Unlike process algebra, SDL and other state-machine-based approaches do not explicitly define composition operators. By defining design patterns and rules for expressing composition in SDL, this thesis contributes to promoting SDL as a behaviour composition language. SDL-2000 has only recently been released, and to the best of our knowledge little experimentation with its new concepts has been reported. We propose original uses of some of the newly introduced SDL concepts that should be of interest to the SDL community.

Dynamic composition of services requires incremental and compositional validation methods. It should be possible to validate components introduced into a system at run-time and to restrict the analysis to the parts of the system affected by the dynamic modifications. This thesis proposes a validation approach suited for dynamic service composition. Validation analysis is complex and requires simplification. Two simplification schemes, projection and incrementation, are proposed; they are two main contributions of this thesis:

• A projection is a simplified system description or viewpoint that emphasises some system properties while hiding others. Rather than analysing the whole system, projections are analysed. In our work, a projection retains only the aspects significant for validating the associations between service roles.

• Incrementation means that validation can be applied incrementally. The proposed validation approach is tightly integrated with the composition of service roles: elementary roles are validated first, then the roles composed of elementary roles, then the composites of composites. In this way, the proposed techniques enable us to validate parts of systems and the composition of system parts.

Another contribution of this thesis is a set of design rules that enable the designer to avoid certain dynamic errors and to develop well-formed state machines. Error search is not postponed until after the specification phase: ambiguous and conflicting behaviours can be identified already at design time.

The projection of service roles leads to interface descriptions expressed as state machines. In this way, our interface descriptions overcome the limitations of static object interfaces: they represent the dynamic behaviour of interactions between service roles. It is also possible to derive required interfaces from provided interfaces. The results of this thesis should therefore be of interest for research on the definition of semantic interfaces.

A major concern in our work has been to provide validation techniques that are easy to understand and apply. Current verification and validation techniques often demand considerable competence in formal modelling and reasoning from the system developer, and their use in the software industry is rather moderate. We believe that our approach, although thoroughly justified, remains easy to understand and use. Its applicability is therefore wider than dynamic validation; it should also be of interest for the validation of static systems.
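To make the projection idea concrete, the following minimal Python sketch hides the internal events of a toy service role and keeps only its interface behaviour. The toy role, its event names, and the function are invented for illustration; they are not taken from the thesis.

# A minimal sketch of behaviour projection, assuming a service role is
# given as a labelled transition system: transitions over events outside
# the visible interface alphabet are treated as internal and skipped over.
# The toy role and all names here are hypothetical.

from collections import deque

def project(transitions, visible):
    """Project a labelled transition system onto a visible alphabet.

    transitions: dict mapping state -> list of (event, next_state)
    visible:     events to retain; all other events are hidden.
    """
    def closure(state):
        # All states reachable from `state` through hidden events only.
        seen, queue = {state}, deque([state])
        while queue:
            s = queue.popleft()
            for event, nxt in transitions.get(s, []):
                if event not in visible and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    projected = {}
    for state in transitions:
        moves = set()
        for s in closure(state):
            for event, nxt in transitions.get(s, []):
                if event in visible:
                    moves.add((event, nxt))
        projected[state] = sorted(moves)
    return projected

# Toy service role: 'log' is internal book-keeping; the rest is interface.
role = {
    "idle":    [("invite", "ringing")],
    "ringing": [("log", "logged"), ("answer", "talking")],
    "logged":  [("answer", "talking")],
    "talking": [("hangup", "idle")],
}
print(project(role, visible={"invite", "answer", "hangup"}))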
12

Optimal Bit and Power Constrained Filter Banks

Hjørungnes, Are January 2000 (has links)
In this dissertation, two filter bank optimization problems are studied. The first is the optimization of filter banks used in a subband coder under a bit constraint. In the second, a multiple-input multiple-output communication system is optimized under a power constraint. Three cases of filter length are considered: unconstrained-length filter banks, transforms, and finite impulse response filter banks with arbitrary given filter lengths.

In source coding and multiple-input multiple-output communication systems, transforms and filter banks are used to decompose the source into samples that are partly decorrelated and therefore better suited for coding or transmission over a channel than the original source samples. Most transforms and filter banks studied in the literature have the perfect reconstruction property. In this dissertation, the perfect reconstruction condition is relaxed, so that the transforms and filter banks may belong to larger sets, which contain the perfect reconstruction transforms and filter banks as subsets.

Jointly optimal analysis and synthesis filter banks and transforms are proposed under the bit and power constraints for all three filter length cases. For a given number of bits used in the quantizers, or for a given channel with a maximum allowable input power, the analysis and synthesis transforms and filter banks are jointly optimized such that the mean square error between the original and decoded signal is minimized. Analytical expressions are obtained for unconstrained-length filter banks and transforms, and an iterative numerical algorithm is proposed for the finite impulse response case.

The channel in the communication problem is modelled as a known multiple-input multiple-output transfer matrix with signal-independent additive vector noise having known second-order statistics. A pre- and postprocessor containing modulation is introduced in the unconstrained-length filter bank system with a power constraint. It is shown that the performance of this system equals that of the power-constrained transform coder system when the dimensions of the latter approach infinity.

In the source coding problem, the results are obtained with different quantization models. In the simplest model, the subband quantizers are modelled as additive white signal-independent noise sources. The proposed unconstrained-length filter banks are evaluated under this model, and it is shown that the proposed transform outperforms the Karhunen-Loève transform. The proposed transform coder also has the same performance as a transform coder using a reduced-rank Karhunen-Loève analysis transform with jointly optimal bit allocation and a Wiener synthesis transform. The proposed finite impulse response filter banks have at least as good theoretical rate-distortion performance as the perfect reconstruction filter banks and the finite impulse response Wiener filter banks used in the comparison.

A practical coding system is introduced in which the subband signals are coded by uniform threshold quantizers using the centroids as representation levels. A mismatch between the theoretical and practical results is observed, and three methods for removing it are introduced. In the first two methods, the filter banks themselves are unchanged, but the coding of the subband signals is changed: in the first, quantizers are derived such that the additive coding noise and the subband signals are uncorrelated; in the second, subtractive dithering is used. In the third method, a signal-dependent colored noise model is introduced and used to redesign the filter banks. In all three methods, good correspondence is achieved between the theoretical and practical results, and the proposed methods achieve comparable or better practical rate-distortion performance than systems using perfect reconstruction filter banks and finite impulse response Wiener synthesis filter banks.

Finally, conditions under which finite impulse response filter banks are optimal are derived.
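As an illustrative baseline only: the abstract compares its designs against the Karhunen-Loève transform (KLT) with optimal bit allocation, and the Python sketch below shows that classical reference point under the same additive-noise quantizer model. The source model, rate, and block size are arbitrary assumptions, and this is not the dissertation's own (non-perfect-reconstruction) design.

import numpy as np

# Classical KLT transform coding of a Gaussian AR(1) source with high-rate
# optimal bit allocation, under the "additive white signal-independent
# noise" quantizer model mentioned in the abstract.  All numbers are
# illustrative placeholders.
N, rho, R = 8, 0.9, 2.0      # block size, AR(1) correlation, bits/sample

cov = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
eigvals = np.linalg.eigvalsh(cov)[::-1]      # subband variances after KLT

# High-rate allocation: b_k = R + 0.5*log2(var_k / geometric mean);
# clipping negative allocations to zero is a standard practical fix.
geo_mean = np.exp(np.log(eigvals).mean())
bits = np.maximum(R + 0.5 * np.log2(eigvals / geo_mean), 0.0)

# Quantizer noise variance modelled as var * 2**(-2b) for each subband.
distortion_klt = np.mean(eigvals * 2.0 ** (-2.0 * bits))
distortion_none = 2.0 ** (-2.0 * R)          # direct coding, unit variance
print(f"KLT coder distortion:    {distortion_klt:.5f}")
print(f"No-transform distortion: {distortion_none:.5f}")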
13

Practical Thermal and Electrical Parameter Extraction Methods for Modelling HBTs and their Applications in Power Amplifiers

Olavsbråten, Morten January 2003 (has links)
A new practical technique for estimating the junction temperature and the thermal resistance of an HBT was developed. The technique estimates an interval for the junction temperature. Its main assumption is that the junction temperature can be calculated from three separate phenomena: the thermal conduction of the substrate, the thermal conduction of the metal connecting the emitter to the via holes, and the effect of the via holes on the substrate temperature. The main features of the technique are that the junction temperature and thermal resistance are calculated from a few physical properties and the layout of the transistors, that the only required software tool is MATLAB, that the calculation time is very short compared to a full 3D thermal simulation, and that the technique is easy for the circuit designer to use. The technique shows good accuracy when applied to several InGaP/GaAs HBTs from Caswell Technology and compared with the results of other methods: all the results fall well within the estimated junction temperature intervals.

A practical parameter extraction method for the VBIC model was also developed. As far as the author knows, this is the first published practical parameter extraction technique for the VBIC model applied to an InGaP/GaAs HBT. The main features of the method are that only a few common measurements are needed, that it is easy and practical for the circuit designer to use, and that it achieves good accuracy within a few iterations. No expensive, specialized parameter extraction software is required; the only software needed is a circuit simulator. The method includes extraction of the bias-dependent forward transit time. It was evaluated on a single-finger, 1x40, InGaP/GaAs HBT from Caswell Technology: only four iterations were required to fit the measurements very well, with less than 1 % error in both the Ic-Vce and Vbe-Vce plots, and maximum magnitude and phase errors over the whole frequency range up to 40 GHz of less than 1.5 dB and 15 degrees. The method was also evaluated on a SiGe HBT, for which models of a single-finger and an 8-finger transistor were extracted. All the dc characteristics of the modeled transistors have less than 3.5 % error. Some amplitude and phase errors are observed in the s-parameters; these are caused by uncertainties in the calibration due to a worn calibration substrate, high temperature drift during the measurements, and uncertainties in the physical dimensions and properties caused by lack of information from the foundry. Overall, the extracted models fit the measurements quite well.

A very linear class A power amplifier was designed using the InGaP/GaAs HBTs from Caswell Technology. The junction temperature estimation technique was used to obtain a very good thermal layout of the power amplifier: the estimated average junction temperature is 98.6°C above the ambient temperature of 45°C at a total dissipated power of 6.4 W, and the maximum junction temperature difference between the transistor fingers is less than 11°C. The PA was constructed with a 'bus bar' power combiner at both input and output and optimized for maximum gain over a 10% bandwidth. It achieved a maximum output power of 34.8 dBm, a 1 dB compression point of 34.5 dBm, a third-order intercept point of 49.9 dBm, and a PAE of 27.2% at 33 dBm output power.
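As a back-of-envelope sketch of the series/parallel thermal-resistance structure behind such an interval estimate, the following Python fragment combines a substrate-conduction term and a metal-conduction term. The 45-degree spreading approximation, the bounding logic, and every geometry and material number are invented placeholders, not the thesis's model or Caswell data.

import math

def substrate_resistance(k, L, W, t):
    """Substrate conduction with simple 45-degree heat spreading from an
    L-by-W heat source through a substrate of thickness t (assumes L > W).
    A real analysis, like the thesis's method, also accounts for the via
    holes; this is only to make the structure of the estimate concrete."""
    return math.log((W + 2*t) * L / ((L + 2*t) * W)) / (2 * k * (L - W))

def metal_resistance(k, length, width, thickness):
    """Conduction along the emitter metal towards the via holes."""
    return length / (k * width * thickness)

k_gaas, k_gold = 46.0, 315.0          # W/(m*K), nominal bulk values
r_sub = substrate_resistance(k_gaas, 40e-6, 2e-6, 100e-6)   # ~8e2 K/W
r_met = metal_resistance(k_gold, 20e-6, 4e-6, 2e-6)         # ~8e3 K/W

p_diss, ambient = 0.1, 45.0           # W per finger (assumed), deg C
# Interval: substrate alone (upper bound) vs. substrate and metal path in
# parallel (lower bound) -- a crude stand-in for the bounding argument.
t_upper = ambient + p_diss * r_sub
t_lower = ambient + p_diss * (r_sub * r_met) / (r_sub + r_met)
print(f"Junction temperature interval: {t_lower:.0f}-{t_upper:.0f} deg C")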
17

Available-Bandwidth Estimation in Packet-Switched Communication Networks

Bergfeldt, Erik January 2010 (has links)
This thesis presents novel methods for real-time estimation of the available bandwidth of a network path. In networks such as the Internet, knowledge of bandwidth characteristics is of great significance in, e.g., network monitoring, admission control, and audio/video streaming. The term bandwidth describes the amount of information a network can deliver per unit of time. For network end users, the only feasible way to obtain the bandwidth properties of a path is to actively probe the network with probe packets and to perform estimation based on the received measurements.

In this thesis, two active-probing-based methods for real-time available-bandwidth estimation are presented and evaluated. The first method, BART (Bandwidth Available in Real-Time), uses Kalman filtering for the analysis of received probe packets. BART is examined analytically and through experiments carried out in wired and wireless laboratory networks as well as over the Internet and commercial mobile broadband networks. Tuning the Kalman filter and enhancing performance by introducing change detection are investigated in more detail. Generally, the results show accurate estimation with only modest computational effort and minor injections of probe packets. However, it is possible to identify weaknesses of BART, and a summary of these, as well as of general problems and challenges in the field of available-bandwidth estimation, is laid out in the thesis. The second method, E-MAP (Expectation-Maximization Active Probing), is designed to overcome some of these issues. E-MAP modifies the active-probing scheme of BART and applies the expectation-maximization algorithm before filtering to generate a bandwidth estimate.

Overall, this thesis shows that in many cases efficient and reliable real-time estimation of available bandwidth can be obtained using lightweight analysis techniques and negligible probe-traffic overhead. This opens up exciting new possibilities for a range of applications and services in communication networks.
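Published descriptions of BART model the probe-train "strain" as growing roughly linearly with the probe rate above the available bandwidth, and track that line with a Kalman filter; the available bandwidth is then the line's zero crossing. The Python sketch below reproduces that idea under simplified assumptions — the strain model, noise levels, filter tuning, and all numbers are illustrative, not taken from the thesis.

import numpy as np

rng = np.random.default_rng(1)
true_B = 40.0                       # Mbit/s, unknown to the estimator

def measure_strain(u):
    """Simulated strain measurement for one probe train at rate u
    (assumed linear above the available bandwidth, flat below)."""
    strain = max(0.0, 0.02 * (u - true_B))
    return strain + rng.normal(0.0, 0.005)      # measurement noise

x = np.array([0.001, -0.01])        # state: [slope alpha, offset beta]
P = np.eye(2)                       # state covariance
Q = np.diag([1e-8, 1e-6])           # process noise (slow drift allowed)
R = 0.005 ** 2                      # measurement noise variance

for _ in range(200):
    u = rng.uniform(45.0, 90.0)     # probe rates above the true bandwidth
    z = measure_strain(u)
    H = np.array([u, 1.0])          # measurement model: z ~ alpha*u + beta
    P = P + Q                       # predict (state assumed ~constant)
    S = H @ P @ H + R               # innovation variance (scalar)
    K = P @ H / S                   # Kalman gain
    x = x + K * (z - H @ x)         # update
    P = P - np.outer(K, H) @ P

alpha, beta = x
print(f"Estimated available bandwidth: {-beta / alpha:.1f} Mbit/s "
      f"(true value {true_B})")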
18

Design of Reliable Communication Solutions for Wireless Sensor Networks : Managing Interference in Unlicensed Bands

Stabellini, Luca January 2009 (has links)
Recent surveys conducted in the context of industrial automation have shown that reliability concerns are today among the major barriers to the adoption of wireless communications for sensing and control applications: this limits the potential of wireless sensor networks and slows down the uptake of this new technology. Overcoming these limitations requires creating awareness of the causes of unreliability and of the possible solutions. In this respect, the main factor responsible for the perceived unreliability is radio interference: the low-power communications of sensor nodes are very sensitive to bad channel conditions and can easily be corrupted by the transmissions of other co-located devices. In this thesis we investigate different techniques that can be exploited to avoid interference or mitigate its effects.

We first consider interference avoidance through dynamic spectrum access: more specifically, we focus on the idea of channel surfing and design algorithms that allow sensor nodes to identify interfered channels, discover their neighbors, and maintain a connected topology in multi-channel environments. Our investigation shows that detecting, and thus avoiding, interference is a feasible task that can be performed by complexity- and power-constrained devices. In the context of spectrum sharing, we further consider the case of networked estimation and quantify the effects of intra-network interference, induced by contention-based medium access, on the performance of an estimation system. We show that by choosing their transmission probability appropriately, sensors belonging to a networked control system can minimize the average distortion of state estimates.

In the second part of this thesis we focus on frequency hopping techniques and propose a new adaptive hopping algorithm. It implements a new approach to frequency hopping: rather than removing bad channels from the adopted hopset, the algorithm uses all the available frequencies, but with probabilities that depend on the experienced channel conditions. Our performance evaluation shows that this approach outperforms traditional frequency hopping schemes, as well as the adaptive implementation included in the IEEE 802.15.1 radio standard, achieving a lower packet error rate.

Finally, we consider the problem of sensor network reprogramming and propose a way of engineering a coding solution based on fountain codes suitable for this challenging task. Using an original genetic approach, we optimize the degree distribution of the codes so as to achieve both low overhead and low decoding complexity. We further engineer the implementation of the fountain codes to allow the recovery of corrupted information through overhearing, improving the resilience of the considered reprogramming protocol to channel errors.
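The adaptive-hopping idea described above — keep every channel in the hopset but visit each with a probability that decreases with its observed error rate — can be sketched as follows in Python. The weighting rule, smoothing factor, and interference scenario are illustrative assumptions, not the algorithm from the thesis.

import random

random.seed(2)
NUM_CHANNELS = 16
true_per = [0.05] * NUM_CHANNELS
for ch in (3, 4, 11):               # a few channels hit by interference
    true_per[ch] = 0.6

est_per = [0.1] * NUM_CHANNELS      # smoothed per-channel PER estimates
ALPHA = 0.05                        # smoothing factor for PER updates

def hop_probabilities():
    # Weight each channel by its estimated success probability; the
    # exponent sharpens the preference for clean channels (assumed choice).
    weights = [max(1.0 - p, 1e-3) ** 4 for p in est_per]
    total = sum(weights)
    return [w / total for w in weights]

losses, TRIALS = 0, 20000
for _ in range(TRIALS):
    ch = random.choices(range(NUM_CHANNELS), weights=hop_probabilities())[0]
    lost = random.random() < true_per[ch]
    losses += lost
    est_per[ch] += ALPHA * (float(lost) - est_per[ch])  # exponential smoothing

print(f"Adaptive hopping PER: {losses / TRIALS:.3f}")
print(f"Uniform hopping PER:  {sum(true_per) / NUM_CHANNELS:.3f}")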
19

On distributed coding for relay channels

Si, Zhongwei January 2010 (has links)
Cooperative transmission is considered a key technique for increasing the robustness, efficiency, or coverage of wireless communication networks. The basic concept is that the information transmission from a sender to a receiver can be aided by one or several relay nodes in a cooperative manner, under constraints on power, complexity, or delay.

The main part of this thesis is devoted to practical realizations of cooperative communication systems. Coding solutions that implement the decode-and-forward protocol in three-node relay channels are proposed, employing convolutional and Turbo codes. Distributed Turbo coding (DTC) was the first technique to bring parallel code concatenation into relay networks. To complement the research on parallel concatenated codes, we propose distributed serially concatenated codes (DSCCs), which provide better error-floor performance and increased robustness compared with DTCs. Thereafter, we present a flexible distributed code design which can be adapted to the channel conditions in a simple way. For both perfect and limited channel-state information, the adaptive coding scheme outperforms static codes, such as DTCs and DSCCs, in terms of transmission rate and application range.

The aforementioned implementations of relaying are based on blockwise decoding and re-encoding at the relay. In some applications, however, these techniques are not feasible due to the limited processing and storage capabilities of the relay nodes. Therefore, we propose to combine instantaneous relaying strategies with bit-interleaved coded modulation. A significant gain can be obtained by using sawtooth and constellation-rearrangement relaying with optimized bit-to-symbol mappings, compared with conventional instantaneous relaying strategies and with standard mappings optimized for point-to-point communications. Both the parameters of the instantaneous relaying schemes and the bit-to-symbol mappings are optimized to maximize mutual information.
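To make the three-node cooperative setup concrete, here is a toy Monte Carlo comparison of uncoded amplify-and-forward (AF) and decode-and-forward (DF) relaying with BPSK. The link SNRs and the simple equal-gain combining at the destination are illustrative assumptions; the thesis's distributed Turbo and serially concatenated codes go far beyond this uncoded sketch.

import random, math

random.seed(3)
SNR_SD, SNR_SR, SNR_RD = 2.0, 8.0, 8.0   # linear per-link SNRs (assumed)

def awgn(x, snr):
    """Unit-power signal plus Gaussian noise of variance 1/snr."""
    return x + random.gauss(0.0, 1.0 / math.sqrt(snr))

errors_af = errors_df = 0
TRIALS = 200_000
for _ in range(TRIALS):
    x = random.choice((-1.0, 1.0))       # BPSK symbol
    y_sd = awgn(x, SNR_SD)               # direct source -> destination link
    y_sr = awgn(x, SNR_SR)               # source -> relay link

    # AF: relay scales its observation to unit power and forwards it.
    g = 1.0 / math.sqrt(1.0 + 1.0 / SNR_SR)
    y_rd_af = awgn(g * y_sr, SNR_RD)
    errors_af += (y_sd + y_rd_af) * x < 0   # equal-gain combining

    # DF: relay makes a hard decision and re-transmits it.
    x_hat = 1.0 if y_sr > 0 else -1.0
    y_rd_df = awgn(x_hat, SNR_RD)
    errors_df += (y_sd + y_rd_df) * x < 0

print(f"AF bit error rate: {errors_af / TRIALS:.4f}")
print(f"DF bit error rate: {errors_df / TRIALS:.4f}")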
20

Quality aspects of internet telephony

Marsh, Ian January 2009 (has links)
Internet telephony has had a tremendous impact on how people communicate. Many now maintain contact using some form of Internet telephony. The motivation for this work has therefore been to address the quality aspects of real-world Internet telephony, for both fixed and wireless telecommunication. The focus has been on the quality of voice communication, since poor quality often leads to user dissatisfaction. The scope of the work is broad in order to address the main factors within IP-based voice communication.

The first four chapters of this dissertation constitute the background material. The first chapter outlines where Internet telephony is deployed today and motivates the topics and techniques used in this research. The second chapter provides background on Internet telephony, including signalling, speech coding, and voice internetworking. The third chapter focuses solely on quality measures for packetised voice systems, and the fourth chapter is devoted to the history of voice research.

The appendix of this dissertation constitutes the research contributions. It includes an examination of the access network, focusing on how calls are multiplexed in wired and wireless systems. Subsequently, in the wireless case, we consider how to hand over calls from 802.11 networks to the cellular infrastructure. We then consider the Internet backbone, where most of our work is devoted to measurements specifically for Internet telephony. These measurements have been applied to estimating telephony arrival processes, measuring call quality, and quantifying the trend in Internet telephony quality over several years. We also consider the end systems, since they are responsible for reconstructing a voice stream under loss and delay constraints. Finally, we estimate voice quality using the ITU proposal PESQ and the packet loss process.

The main contribution of this work is a systematic examination of Internet telephony. We describe several methods to enable adaptable solutions for maintaining consistent voice quality, and we have found that relatively small technical changes can lead to substantial improvements in user quality. A second contribution is a suite of software tools designed to ascertain voice quality in IP networks; some of these tools are in use within commercial systems today.
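Estimating voice quality from the packet loss process typically starts by characterising the losses, and a common first step is a two-state Gilbert model. The Python sketch below simulates such a process and recovers its loss rate and mean loss-burst length; the transition probabilities are made-up illustrative values, not measurements from the dissertation.

import random

random.seed(4)
P_GOOD_TO_BAD = 0.02     # assumed per-packet transition probabilities
P_BAD_TO_GOOD = 0.35

state_bad = False
losses, burst_len, burst_lens = 0, 0, []
N = 100_000
for _ in range(N):
    if state_bad:
        losses += 1          # every packet in the bad state is lost
        burst_len += 1
        if random.random() < P_BAD_TO_GOOD:
            state_bad = False
            burst_lens.append(burst_len)
            burst_len = 0
    else:
        if random.random() < P_GOOD_TO_BAD:
            state_bad = True

loss_rate = losses / N
mean_burst = sum(burst_lens) / len(burst_lens)
print(f"Loss rate: {loss_rate:.3%}, mean burst length: {mean_burst:.2f}")
# Analytically: stationary P(bad) = p/(p+q) ~ 5.4%; mean burst = 1/q ~ 2.86.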
