371 |
Concatenated space-time codes in Rayleigh fading channels. Byers, Geoffrey James. 02 November 2011.
The rapid growth of wireless subscribers and services, as well as the increased use of internet
services, suggests that wireless internet access will increase rapidly over the next few years.
This will require the provision of high data rate wireless communication services. However,
the problem of a limited and expensive radio spectrum, coupled with the problem of the
wireless fading channel, makes it difficult to provide these services. For these reasons, the
research area of high data rate, bandwidth efficient and reliable wireless communications
is currently receiving much attention.
Concatenated codes are a class of forward error correction codes which consist of two or
more constituent codes. These codes achieve reliable communications very close to the
Shannon limit provided that sufficient diversity, such as temporal or spatial diversity, is
available. Space-time trellis codes (STTCs) merge channel coding and transmit antenna
diversity to improve system capacity and performance. The main focus of this dissertation
is on STTCs and concatenated STTCs in quasi-static and rapid Rayleigh fading channels.
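As background on how transmit antenna diversity is obtained (an illustrative textbook example, not one of the trellis codes studied in this dissertation), the classical Alamouti scheme sends two symbols over two antennas and two symbol periods,

$$
\mathbf{C} = \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix},
$$

where rows index symbol periods and columns index transmit antennas. With channel gains $h_1$ and $h_2$, linear combining at the receiver recovers each symbol scaled by $|h_1|^2 + |h_2|^2$, so a deep fade on one path is compensated by the other; this two-fold spatial diversity is the effect that STTCs combine with trellis (channel) coding.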
Analytical bounds are useful in determining the behaviour of a code at high SNRs where
it becomes difficult to generate simulation results. A novel method is proposed to analyse
the performance of STTCs, and the accuracy of this analysis is assessed against simulation
results, which show that it closely approximates system performance.
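For orientation, the classical Chernoff-type pairwise error probability bound for space-time codes in quasi-static Rayleigh fading (Tarokh et al.) is the usual starting point for such high-SNR analyses; it is quoted here only as background and is not the novel method proposed in this dissertation:

$$
P(\mathbf{c} \rightarrow \mathbf{e}) \;\le\; \left(\prod_{i=1}^{r} \lambda_i\right)^{-n_R} \left(\frac{E_s}{4N_0}\right)^{-r\, n_R},
$$

where $r$ and $\lambda_i$ are the rank and non-zero eigenvalues of the codeword difference matrix $(\mathbf{c}-\mathbf{e})(\mathbf{c}-\mathbf{e})^{H}$ and $n_R$ is the number of receive antennas; the exponent $r\, n_R$ gives the diversity order and the eigenvalue product the coding gain.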
The field of concatenated STTCs has already received much attention and has shown
improved performance over conventional STTCs. It was recently shown that double concatenated
convolutional codes in AWGN channels outperform simple concatenated codes.
Motivated by this, two double concatenated STTC structures are proposed and their performance is compared to that of simple concatenated STTCs. It is shown that double
concatenated STTCs outperform simple concatenated STTCs in rapid Rayleigh fading
channels. An analytical model for this system in rapid fading is developed which combines
the proposed analytical method for STTCs with existing analytical techniques for
concatenated convolutional codes.
The final part of this dissertation considers a direct-sequence/slow-frequency-hopped (DS/SFH) code division multiple access (CDMA) system with turbo coding and multiple transmit
antennas. The system model is modified to include a more realistic, time-correlated Rayleigh fading channel, and side information is incorporated to improve the performance of the turbo decoder. Simulation results are presented for this system, and it is shown that transmit antenna diversity and side information can be used to improve system performance. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2002.
|
372 |
Performance of the transmission control protocol (TCP) over wireless with quality of service. Walingo, Tom. January 2001.
The Transmission Control Protocol (TCP) is the most widely used transport protocol in
the Internet. TCP is a reliable transport protocol that is tuned to perform well in wired
networks where packet losses are mainly due to congestion. Wireless channels are
characterized by losses due to transmission errors and handoffs. TCP interprets these
losses as congestion and invokes congestion control mechanisms resulting in degradation
of performance. TCP is usually layered over the Internet Protocol (IP) at the network
layer. IP is not reliable and does not provide for any Quality of Service (QoS). The
Internet Engineering Task Force (IETF) has provided two techniques for providing QoS
in the Internet. These include Integrated Services (IntServ) and Differentiated Services
(DiffServ). IntServ provides flow-based quality of service and thus does not scale on
connections carrying a large number of flows. DiffServ has grown in popularity because it is scalable. A
packet in a DiffServ domain is classified into a class of service according to its contract
profile and treated according to that class. Providing end-to-end QoS therefore requires strong
interaction between the transport protocol and the network protocol. In this dissertation
we consider the performance of TCP over a wireless channel. We study whether the
current TCP protocols can deliver the desired quality of service given the challenges
they face on the wireless channel. The dissertation discusses the methods of providing for
QoS in the Internet. We derive an analytical model of the TCP protocol, which is then
extended to cater for the wireless channel and, further, for differentiated services. The model is
shown to be accurate when compared to simulation. We then conclude by assessing to
what degree the desired QoS can be provided with TCP over a wireless channel. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2001.
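As a point of reference for the kind of analytical model referred to above, a widely used steady-state approximation (the Mathis "square-root" formula, quoted here only for orientation and not the model derived in this dissertation) relates TCP throughput $B$ to the packet loss probability $p$:

$$
B \;\approx\; \frac{MSS}{RTT}\sqrt{\frac{3}{2p}},
$$

where $MSS$ is the maximum segment size and $RTT$ the round-trip time. Because wireless transmission errors and handoffs inflate $p$ without any actual congestion, throughput falls even when link capacity is available, which is the degradation at issue here.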
|
373 |
Network compression via network memory: realization principles and coding algorithms. Sardari, Mohsen. 13 January 2014.
The objective of this dissertation is to investigate both the theoretical and practical aspects of redundancy elimination methods in data networks. Redundancy elimination provides a powerful technique to improve the efficiency of network links in the face of redundant data. In this work, the concept of network compression is introduced to address the redundancy elimination problem. Network compression aspires to exploit the statistical correlation in data to better suppress redundancy. In a nutshell, network compression enables memorization of data packets in some nodes in the network. These nodes can learn the statistics of the information source generating the packets, which can then be used toward reducing the length of codewords describing the packets emitted by the source. Memory elements facilitate the compression of individual packets using the side-information obtained from memorized data, which is called "memory-assisted compression". Network compression improves upon de-duplication methods that only remove duplicate strings from flows.
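As a toy illustration of compression with memorized side information (a generic sketch using a zlib preset dictionary as the shared "memory"; the packet contents are invented and this is not the memory-assisted compression algorithm developed in the dissertation):

```python
# A zlib preset dictionary built from previously seen traffic stands in for the
# "memory" shared by the encoder and decoder; packets resembling earlier traffic
# then compress to fewer bytes than stand-alone compression achieves.
import zlib

memory = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
          b"User-Agent: Mozilla/5.0\r\nAccept: text/html\r\n") * 8
packet = b"GET /images/logo.png HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n"

plain = zlib.compress(packet, 9)                      # stand-alone compression

enc = zlib.compressobj(level=9, zdict=memory)         # encoder with memorized side information
assisted = enc.compress(packet) + enc.flush()

dec = zlib.decompressobj(zdict=memory)                # decoder holds the same memory
assert dec.decompress(assisted) == packet

print(len(packet), len(plain), len(assisted))         # assisted is typically the shortest
```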
The first part of the work includes the design and analysis of practical algorithms for memory-assisted compression. These algorithms are designed based on the theoretical foundation proposed in our group by Beirami et al. The performance of these algorithms is compared to
existing compression techniques when the algorithms are tested on real Internet traffic traces. Then, novel clustering techniques are proposed which can identify various information sources and apply the compression accordingly. This approach results in superior performance for memory-assisted compression when the input data comprises sequences generated by various and unrelated information sources.
In the second part of the work the application of memory-assisted compression in wired networks is investigated. In particular, networks with random and power-law graphs are studied. Memory-assisted compression is applied in these graphs and the routing problem for compressed flows is addressed. Furthermore, the network-wide gain of the memorization is defined and its scaling behavior versus the number of memory nodes is characterized. In particular, through our analysis on these graphs, we show that non-vanishing network-wide gain of memorization is obtained even when the number of memory units is a tiny fraction of the total number of nodes in the network.
In the third part of the work the application of memory-assisted compression in wireless networks is studied. For wireless networks, a novel network compression approach via memory-enabled helpers is proposed. Helpers provide side-information that is obtained via overhearing.
The performance of network compression in wireless networks is characterized and the following benefits are demonstrated: offloading the wireless gateway, increasing the maximum number of mobile nodes served by the gateway, reducing the average packet delay, and improving the overall throughput in the network.
Furthermore, the effect of wireless channel loss on the performance of the network compression scheme is studied. Finally, the performance of memory-assisted compression working in tandem with de-duplication is investigated and simulation results on real data traces from wireless users are provided.
|
374 |
CMOS RF SOC Transmitter Front-End, Power Management and Digital Analog Interface. Leung, Matthew Chung-Hin. 19 May 2008.
With the growing trend of wireless electronics, the frequency spectrum is crowded with different applications. High data transfer rate solutions that operate in the license-exempt frequency spectrum are sought. The most promising candidate is the 60 GHz millimeter-wave circuit with multi-gigabit transfer rates. In order to provide a cost-effective solution, circuits designed in CMOS are implemented in a single SOC.
In this work, a modeling technique created in Cadence shows an error of less than 3 dB in magnitude and 5 degrees in phase for a single transistor. Additionally, an error of less than 3 dB in the power performance of the PA is also verified. At the same time, layout strategies required for millimeter-wave front-end circuits are investigated. All of these combined techniques help the design converge to one simulation platform for system-level simulation.
Another aspect enabling the design as a single SOC lies in integration. In order to integrate digital and analog circuits together, necessary peripheral circuits must be designed. An on-chip voltage regulator, which steps the analog power supply voltage down to a level compatible with the digital circuits, has been designed and demonstrates an efficiency of 65 percent under the specified area constraint. The overall output voltage ripple is about 2 percent.
With the necessary power supply voltage available, gate voltage bias circuit designs have been illustrated. They provide feasible solutions in terms of area and power consumption. Temperature and power supply sensitivities are minimized in the first two designs. Process variation is further compensated in the third design, which demonstrates a robust solution in which each source of variation stays well within 10%.
As the DC conditions are achieved on-chip for both the digital and analog circuits, the two domains must be connected through a DAC. A high-speed DAC is designed with special layout techniques. Measurements of the pulse-shaping FIR filter verify that the DAC can operate at speeds above 3 Gbps.
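As a rough sketch of the role of a pulse-shaping FIR filter in front of the DAC (generic, assumed parameters; not the filter or data rates of the actual chip):

```python
# Generate the oversampled, pulse-shaped sample stream a pulse-shaping FIR
# filter would feed to a high-speed DAC.  A windowed-sinc low-pass from SciPy
# stands in for the on-chip filter; all parameters are assumptions.
import numpy as np
from scipy import signal

sps = 4                                              # DAC samples per symbol
symbols = 2 * np.random.randint(0, 2, 256) - 1       # random binary (BPSK) symbols
upsampled = np.zeros(len(symbols) * sps)
upsampled[::sps] = symbols                           # zero-stuff to the DAC sample rate

taps = signal.firwin(8 * sps + 1, 1.0 / sps)         # low-pass cut off at half the symbol rate
dac_samples = sps * signal.lfilter(taps, 1.0, upsampled)   # stream driving the DAC
```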
With all of these integrated elements and modeling techniques, a high data transfer rate CMOS RF SOC operating at 60 GHz is possible.
|
375 |
Time-sensitive communication of digital images, with applications in telepathology. Khire, Sourabh Mohan. 08 July 2009.
Telepathology is defined as the practice of pathology at a distance using video imaging and telecommunications. In this thesis we address the two main technology challenges in implementing telepathology, viz. compression and transmission of digital pathology images.
One of the barriers to telepathology is the availability and the affordability of high-bandwidth communication resources. High-bandwidth links are required because of the large size of the uncompressed digital pathology images. For efficient utilization of available bandwidth, these images need to be compressed. However, aggressive image compression may introduce objectionable artifacts and result in an inaccurate diagnosis. This discussion helps us to identify two main design challenges in implementing telepathology:
1. Compression: There is a need to develop or select an appropriate image compression algorithm and an image quality criterion to ensure maximum possible image compression, while ensuring that diagnostic accuracy is not compromised.
2. Transmission: There is a need to develop or select a smart image transmission scheme which can facilitate the transmission of the compressed image to the remote pathologist without violating the specified bandwidth and delay constraints.
We addressed the image compression problem by conducting subjective tests to determine the maximum compression that can be tolerated before the pathology images lose their diagnostic value. We concluded that the diagnostically lossless compression ratio is at least around 5 to 10 times higher than the mathematically lossless compression ratio, which is only about 2:1. We also set up subjective tests to compare the performance of the JPEG and the JPEG 2000 compression algorithms which are commonly used for compression of medical images. We concluded that JPEG 2000 outperforms JPEG at lower bitrates (bits/pixel), but both the algorithms perform equally well at higher bitrates.
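The flavour of such a rate sweep can be reproduced in a few lines of Python (the file name is hypothetical; JPEG only, since JPEG 2000 support depends on the local Pillow build, and this is of course no substitute for the subjective evaluation by pathologists described above):

```python
# Sweep JPEG quality settings for one image and report compression ratio and
# bits per pixel.  "pathology_slide.png" is a hypothetical sample file.
import io
from PIL import Image

img = Image.open("pathology_slide.png").convert("RGB")
raw_bytes = img.width * img.height * 3                # uncompressed 24-bit size

for quality in (90, 70, 50, 30, 10):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    compressed = buf.getbuffer().nbytes
    ratio = raw_bytes / compressed
    bpp = 8.0 * compressed / (img.width * img.height)
    print(f"quality {quality:2d}: {ratio:5.1f}:1  ({bpp:.2f} bits/pixel)")
```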
We also addressed the issue of image transmission for telepathology by proposing a two-stage transmission scheme, in which coarse image information compressed at a diagnostically lossless level is sent to the clients in the first stage, and the region of interest is transmitted at a mathematically lossless compression level in the second stage, thereby reducing the total image transmission delay.
|
376 |
Improving resource utilization in carrier Ethernet technologies. Caro Perez, Luis Fernando. 19 January 2010.
Ethernet is starting to move from local area networks to carrier networks. Nevertheless, as the requirements of carrier networks are more demanding, the technology needs to be enhanced. Schemes designed to improve Ethernet to match carrier requirements can be categorized into two classes. The first class improves Ethernet control components only (STP-based technologies), and the second class improves both Ethernet control and forwarding components (label-based forwarding technologies). This thesis analyzes and compares label space usage for the label-based forwarding technologies to ensure their scalability. The applicability of existing techniques and studies that can be used to overcome or reduce label scalability issues is evaluated. Additionally, this thesis proposes an ILP model to calculate the optimal performance of STP-based approaches and compares them with label-based forwarding technologies in order to determine, given a specific scenario, which approach to use.
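As a back-of-the-envelope illustration of why label space matters (a generic counting argument, not a result of this thesis): if every unidirectional point-to-point path in a full mesh of $N$ edge nodes carries its own label, the network must distinguish

$$
N(N-1) \;\text{labels},
$$

so $N = 200$ edge nodes already imply $39\,800$ labels; whether the label field and per-link label reuse can accommodate this growth is the kind of scalability question analysed here.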
|
377 |
Asset Management in Electricity Transmission Enterprises: Factors that affect Asset Management Policies and Practices of Electricity Transmission Enterprises and their Impact on Performance. Crisp, Jennifer J. January 2004.
This thesis draws on techniques from Management Science and Artificial Intelligence to explore organisational aspects of asset management in electricity transmission enterprises. In this research, factors that influence policies and practices of asset management within electricity transmission enterprises have been identified, in order to examine their interaction and how they impact the policies, practices and performance of transmission businesses. It has been found that, while there is extensive literature on the economics of transmission regulation and pricing, there is little published research linking the engineering and financial aspects of transmission asset management at a management policy level. To remedy this situation, this investigation has drawn on a wide range of literature, together with expert interviews and personal knowledge of the electricity industry, to construct a conceptual model of asset management with broad applicability across transmission enterprises in different parts of the world. A concise representation of the model has been formulated using a Causal Loop Diagram (CLD). To investigate the interactions between factors of influence it is necessary to implement the model and validate it against known outcomes. However, because of the nature of the data (a mix of numeric and non-numeric data, imprecise, incomplete and often approximate) and the complexity and imprecision in the definition of relationships between elements, this problem is intractable to modelling by traditional engineering methodologies. The solution has been to utilise techniques from other disciplines. Two implementations have been explored: a multi-level fuzzy rule-based model and a system dynamics model; they offer different but complementary insights into transmission asset management. Each model shows potential for use by transmission businesses for strategic-level decision support. The research demonstrates the key impact of routine maintenance effectiveness on the condition and performance of transmission system assets. However, performance of the transmission network is not only related to equipment performance, but is also a function of system design and operational aspects, such as loading and load factor. The type and supportiveness of regulation, together with the objectives and corporate culture of the transmission organisation, also play roles in promoting various strategies for asset management. The cumulative effect of all these drivers is to produce differences in asset management policies and practices, discernible between individual companies and at a regional level, where similar conditions have applied historically and today.
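A deliberately simplified sketch in the spirit of the system dynamics implementation (all coefficients invented for illustration; this is not the thesis's model) shows how routine maintenance effectiveness and loading feed back into asset condition:

```python
# Toy stock-and-flow loop: asset condition degrades with loading and wear and
# is partly restored by routine maintenance.  Coefficients are invented.
condition = 1.0                    # 1.0 = new, 0.0 = end of life
maintenance_effectiveness = 0.6    # fraction of each year's wear removed
load_factor = 0.7
base_wear_rate = 0.05              # per-unit condition lost per year at full load

for year in range(1, 41):
    wear = base_wear_rate * load_factor * (2.0 - condition)   # worn assets wear faster
    condition -= wear * (1.0 - maintenance_effectiveness)
    condition = max(0.0, min(1.0, condition))
    if year % 10 == 0:
        print(f"year {year:2d}: condition = {condition:.2f}")
```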
|
378 |
Characterisation of end-to-end performance for web-based file server repositories / Mascarenhas da Veiga Alves, Manoel Eduardo. January 2001.
Thesis (M.Eng.Sc.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 2001. / Bibliography: leaves 128-135.
|
379 |
Future development trends of optical transport network infrastructure: an infrastructural framework for metropolitan-based optical transport networks: a field test of a Chinese ISP and a case study of a Chinese electrical power company / Chen, Sheng. January 2006.
Thesis (M.ICT.)--University of Wollongong, 2006. / Typescript. Includes bibliographical references: leaves 112-121.
|
380 |
Prototyping a peer-to-peer session initiation protocol user agent / Tsietsi, Mosiuoa. January 2008.
Thesis (M.Sc. (Computer Science)) - Rhodes University, 2008
|