
Distributed Joint Source-Channel Coding For Multiple Access Channels

Rajesh, R 05 1900 (has links)
We consider the transmission of correlated sources over a multiple access channel (MAC). Multiple access channels are important building blocks in many practical communication systems, e.g., local area networks (LANs), cellular systems and wireless multi-hop networks, and the topic has therefore been studied for the last several decades. One recent motivation is estimating a random field via wireless sensor networks. Often the sensor nodes are densely deployed, resulting in correlated observations. These sensor nodes need to transmit their correlated observations to a fusion center, which uses the data to estimate the sensed random field. Sensor nodes have limited computational and storage capabilities and very limited energy; since transmission is very energy intensive, it is important to minimize it. This motivates our problem of energy efficient transmission of correlated sources over a sensor network. Sensor networks are often arranged in a hierarchical fashion: neighboring nodes can first transmit their data to a cluster head, which can further compress the information before transmission to the fusion center. The transmission of data from sensor nodes to their cluster head is usually through a MAC. At the fusion center the underlying physical process is estimated. The main trade-off is between the rates at which the sensors send their observations and the distortion incurred in estimation at the fusion center. The availability of side information at the encoders and/or the decoder can reduce the rate of transmission.
In this thesis, the above scenario is modeled as an information theoretic problem. Efficient joint source-channel codes are discussed under various assumptions on side information and distortion criteria. Sufficient conditions are given for transmission of discrete/continuous alphabet sources with a given distortion over a discrete/continuous alphabet MAC, and various previous results are recovered as special cases of our results. Furthermore, we study the practically important case of the Gaussian MAC (GMAC) in detail and propose new joint source-channel coding schemes for discrete and continuous sources; optimal schemes are identified in different scenarios. Protocols like TDMA, FDMA and CDMA are widely used across systems and standards, and when these protocols are used the MAC becomes a system of orthogonal channels. Our general conditions can be specialized to obtain sufficient conditions for lossy transmission over this system. Using these conditions, we identify an optimal scheme for transmission of Gaussian sources over orthogonal Gaussian channels and show that the Amplify and Forward (AF) scheme performs close to the optimal scheme even at high SNR. Next we investigate transmission of correlated sources over a fast fading MAC with perfect or partial channel state information available at both the encoders and the decoder. We provide sufficient conditions for transmission with given distortions, along with power allocation policies for efficient transmission. We then use a MAC with side information as a building block of a hierarchical sensor network. For Gaussian sources over Gaussian MACs, we show that AF performs well in such sensor network scenarios, where battery power is at a premium. We extend this result to the hierarchical network scenario and show that AF can perform favourably compared with a scheme based on Slepian-Wolf source coding and independent channel coding.
In a hierarchical sensor network the cluster heads often need to send only a function of the sensor observations to the fusion center. In such a setup the sensor nodes can compress the data sent to the cluster head by exploiting both the correlation in the data and the structure of the function to be computed at the cluster head. Depending on the function, exploiting its structure can substantially reduce the data rate required for transmission. We provide efficient joint source-channel codes for transmitting a general class of functions of the sources over the MAC.
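As a point of reference for these sufficient conditions: specialized to lossless transmission of a correlated pair (U1, U2) over a discrete memoryless MAC p(y|x1, x2), results of this kind reduce to the classical Cover-El Gamal-Salehi conditions (stated here without the common-information part), namely that encodings p(x1|u1) and p(x2|u2) exist with

    H(U1 | U2) < I(X1; Y | X2, U2),
    H(U2 | U1) < I(X2; Y | X1, U1),
    H(U1, U2) < I(X1, X2; Y).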

On Asymmetric Distributed Source Coding For Wireless Sensor Networks

Samar, * 12 1900 (has links)
We are concerned with the worst-case distributed source coding (DSC) problem in asymmetric and interactive communication scenarios, and with its application to enhancing the lifetime of data-gathering wireless sensor networks. First, we propose a unified canonical framework, obtained by considering different communication constraints and objectives, to address the variants of the DSC problem. Second, since the notion of information entropy cannot be used for worst-case information-theoretic analysis, we propose information ambiguity, derive its various properties, and prove that it is a valid information measure. Third, for a few variants of the DSC problem of interest, we provide communication protocols and prove their optimality.
In a typical data-gathering sensor network, the base-station that wants to gather sensor data is often assumed to be much more resourceful than the sensor nodes with respect to energy, computation, and communication capabilities. Therefore, we argue that in such networks the base-station should bear most of the burden of communication and computation, and allowing the base-station and sensor nodes to communicate with each other interactively enables us to carry this out. Our definition of sensor network lifetime allows us to reduce the problem of maximizing the worst-case network lifetime to the problem of minimizing the number of bits communicated by the nodes in the worst case, which is further reduced to the worst-case DSC problem in asymmetric and interactive communication scenarios, under the assumption that the base-station knows the support-set of the sensor data. We demonstrate that the optimal solutions of the energy-oblivious DSC problem variants cannot be directly applied to data-gathering sensor networks, as they may be inefficient in energy-constrained settings. We address a few energy-efficient variants of the DSC problem and provide optimal communication protocols for sensor networks based on those variants. Finally, we combine distributed source coding with two other system-level opportunities, channel coding and the cooperative nature of the nodes, to further enhance the lifetime of the sensor networks. We address various scenarios and demonstrate how the computational complexity of the network-lifetime maximization problem depends on the complex interplay of the above system-level opportunities.
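For intuition, one minimal formalization of such a worst-case measure (an illustrative assumption here, not necessarily the thesis's exact definition): if the base-station knows that the sensor data x lies in a support-set S, and y denotes what it has learned so far, the ambiguity is the number of data values still consistent with y,

    A(x | y) = |{ x' in S : x' consistent with y }|,

and the worst-case number of bits the sensors must send scales as max over y of log2 A(x | y), playing the role that entropy plays in average-case analysis.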

Low-PAPR, Low-delay, High-Rate Space-Time Block Codes From Orthogonal Designs

Das, Smarajit 03 1900 (has links)
It is well known that communication systems employing multiple transmit and multiple receive antennas provide high data rates along with increased reliability. Among the design criteria for space-time block codes (STBCs) for multiple-input multiple-output (MIMO) communication systems are large transmit diversity, high data-rate, low decoding complexity, low decoding delay and low peak-to-average power ratio (PAPR). STBCs based on real orthogonal designs (RODs) and complex orthogonal designs (CODs) achieve full transmit diversity and, in addition, are single-symbol maximum-likelihood (ML) decodable. It has been observed that the data-rate (in number of information symbols per channel use) of square CODs falls exponentially with the number of antennas, which has led to the construction of rectangular CODs with high rate. We have constructed a class of maximal-rate CODs for n transmit antennas with rate (n+2)/(2n) if n is even and (n+3)/(2n+2) if n is odd; the novelty of this construction is that the codes are constructed from square CODs. Though these codes have a high rate, this is achieved at the expense of large decoding delay, especially when the number of antennas is 5 or more; moreover, the rate converges to one half as the number of transmit antennas increases. We give a construction of rate-1/2 CODs with a substantial reduction in decoding delay when compared with the maximal-rate codes. Though the codes mentioned above offer a significant rate improvement over square CODs for the same number of antennas, their decoding delay is still considerably high. For certain applications it is desirable to construct codes that are balanced with respect to both rate and decoding delay. To this end, we have constructed high-rate, low-decoding-delay RODs and CODs from Cayley-Dickson algebra.
Apart from the rate and decoding delay of orthogonal designs, the peak-to-average power ratio (PAPR) of an STBC is very important from an implementation point of view. The standard constructions of square complex orthogonal designs contain a large number of zeros in the matrix, resulting in high PAPR. We give a construction of square complex orthogonal designs with fewer zero entries than the known constructions; when a + 1 is a power of 2 (the number of antennas being 2^a), we get codes with no zero entries. Furthermore, we obtain complex orthogonal designs with no zero entry for any power-of-2 number of antennas by introducing coordinate-interleaved variables in the design matrix. These codes have a significant advantage over existing codes in terms of PAPR. The only sacrifice made in the construction is that the signaling complexity of these codes is marginally greater than that of the existing codes (with zero entries), since some of the entries in the matrix consist of coordinate-interleaved variables. A class of maximal-rate CODs is also constructed. (For the mathematical equations, please see the PDF file.)
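For a concrete reference point, the simplest square COD, and the only rate-1 COD for more than one transmit antenna, is the 2x2 Alamouti design

    G(x1, x2) = [  x1    x2  ]
                [ -x2*   x1* ]

where rows index channel uses, columns index antennas, and * denotes complex conjugation. It is single-symbol ML decodable and has no zero entries; it is the larger square CODs whose standard constructions introduce the many zeros, and hence the high PAPR, addressed above.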

Belief Propagation Based Signal Detection In Large-MIMO And UWB Systems

Som, Pritam 09 1900 (has links)
Large-dimensional communication systems are likely to play an important role in modern wireless communications, where the dimensions can be in space, time, frequency and their combinations. Large dimensions can bring several advantages with respect to the performance of communication systems, but harnessing these benefits in practice is challenging: in particular, optimum signal detection becomes prohibitively complex in large dimensions. Consequently, low-complexity detection techniques that scale well to large dimensions while achieving near-optimal performance are of interest. Belief propagation (BP) is a technique that solves inference problems using graphical models. BP has been successfully employed in a variety of applications including computational biology, statistical signal/image processing, machine learning and artificial intelligence. BP is well suited to several communication problems as well, e.g., decoding of turbo codes and low-density parity-check (LDPC) codes, and multiuser detection.
We propose a BP-based algorithm for detection in large-dimension linear vector channels employing binary phase shift keying (BPSK) modulation, by adopting a Markov random field (MRF) graphical model of the system. The proposed approach is shown to achieve i) detection at low complexities that scale well for large dimensions, and ii) improved bit error performance for an increased number of dimensions (a behavior we refer to as 'large-system behavior'). As one application of the BP-based approach, we demonstrate the effectiveness of the proposed BP algorithm for decoding non-orthogonal space-time block codes (STBCs) from cyclic division algebras (CDAs) having large dimensions. We further improve the performance of the proposed algorithm through damped belief propagation, where the messages passed from one iteration to the next are formed as a weighted combination of the messages from the current and previous iterations. Next, we extend the proposed BP approach to higher-order modulation through a novel scheme of interference cancellation. The proposed scheme exhibits large-system behavior in terms of bit error performance while remaining scalable to large dimensions in terms of complexity. Finally, as another application of the BP-based approach, we illustrate the adoption and performance of the proposed BP algorithm for low-complexity near-optimal equalization in severely delay-spread UWB MIMO-ISI channels that are characterized by a large number (tens to hundreds) of multipath components.
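The damping step described above admits a one-line statement: if m~(t) is the message newly computed at iteration t and m(t-1) is the message passed at the previous iteration, the message actually passed is the convex combination

    m(t) = alpha * m~(t) + (1 - alpha) * m(t-1),    0 < alpha <= 1,

with alpha = 1 recovering undamped BP; the abstract does not specify the weight used, so alpha should be read as a tunable parameter.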

A Filterbank Precoding Framework For MIMO Frequency Selective Channels

Vijaya Krishna, A 08 1900 (has links)
Wireless systems with multiple antennas at both the transmitter and receiver (MIMO systems) have been the focus of research in the recent past due to their ability to provide higher data rates and better reliability than their single-antenna counterparts. Designing a communication system for MIMO frequency-selective channels poses many signal processing challenges. Popular methods like MIMO-OFDM and space-time precoding linearly process blocks of data at both the transmitter and the receiver; independence between blocks is ensured by introducing sufficient redundancy between successive blocks. This approach has many pitfalls, including the limit on achievable data rate due to the redundancy requirements and the need for additional coding/processing. In this thesis, we provide a filterbank precoding framework (FBP) for communication over MIMO frequency-selective channels. By viewing the channel as a polynomial matrix, we derive the minimum redundancy required for achieving FIR equalization of the precoded channel. It is shown that, for most practical channels, a nominal redundancy is enough. The results are general and hold for channels of any dimension and order. We derive the zero-forcing and MMSE equalizers for the precoded channel and analyze the role of equalizer delay in system performance. We extend the minimum-redundancy result to the case of space-time filterbank precoding (STFP). Introducing the time dimension allows the channel to be represented by a block pseudocirculant matrix, and by using the Smith form of block pseudocirculant matrices we show that very high data rates can be achieved with STFP. When channel information is available at the transmitter, we derive an iterative algorithm for obtaining the MMSE-optimal precoder-equalizer pair. We then compare FBP with the block processing methods. It is shown that FBP provides better BER performance than the block processing methods at a lower computational cost, and the reasons for the better performance of FBP are discussed.
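To make the polynomial-matrix view concrete (standard notation, chosen here for illustration): an order-L MIMO FIR channel is written as

    H(z) = H_0 + H_1 z^{-1} + ... + H_L z^{-L},

and, with a filterbank precoder F(z), FIR zero-forcing equalization with decision delay d amounts to finding an FIR G(z) such that

    G(z) H(z) F(z) = z^{-d} I.

Loosely, such a G(z) exists when the Smith form of H(z)F(z) has pure-delay invariant factors, a property the precoder's added redundancy is designed to secure.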

A Polymorphic Finite Field Multiplier

Das, Saptarsi 06 1900 (has links) (PDF)
Cryptographic algorithms like the Advanced Encryption Standard and elliptic curve cryptography are designed using the algebraic properties of finite fields, so the performance of these algorithms depends on the performance of the underlying field operations. Moreover, different algorithms use finite fields of widely varying order. In order to cater to finite fields of different orders in an area-efficient manner, it is necessary to design solutions in the form of hardware consolidations, keeping the performance requirements in mind. Due to their small area occupancy and high utilization, such circuits are less likely to stay idle and are therefore less prone to energy loss through leakage power dissipation. Another class of applications that relies on finite field algebra is the various error detection and correction techniques. Most of the classical block codes used for detecting bit errors in communication over noisy channels apply the algebraic properties of finite fields: the cyclic redundancy check is one such algorithm, used for detecting errors in data in computer networks, and the Reed-Solomon code is the most notable classical block code because of its widespread use in storage devices like CDs, DVDs and HDDs.
In this work we present the architecture of a polymorphic multiplier for operations over various extensions of GF(2). We evolved the architecture of a textbook shift-and-add multiplier into the architecture of the polymorphic multiplier through a generalized mathematical formulation. The polymorphic multiplier is capable of morphing itself at runtime to create data-paths for multiplications of various orders. In order to exploit the resources optimally, we also introduced the capability of sub-word parallel execution in the polymorphic multiplier. The synthesis results for an instance of such a polymorphic multiplier show about 41% savings in area, with 21% degradation in maximum operating frequency, compared to a collection of dedicated multipliers with equivalent functionality. We introduced the multiplier as an accelerator unit for field operations in the coarse-grained runtime-reconfigurable platform REDEFINE, and observed about 40-50% improvement in the performance of the AES algorithm and about a 52x improvement in the performance of the Karatsuba-Ofman multiplication algorithm.
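As a minimal software model of the textbook shift-and-add multiplier that the polymorphic architecture generalizes (a sketch for one fixed field; the hardware version makes the width m and the reduction polynomial runtime-selectable):

    def gf2m_mul(a: int, b: int, poly: int, m: int) -> int:
        """Shift-and-add multiplication in GF(2^m) with bit-packed operands.
        poly is the irreducible reduction polynomial, with bit m set."""
        r = 0
        while b:
            if b & 1:
                r ^= a              # addition in characteristic 2 is XOR
            b >>= 1
            a <<= 1
            if a & (1 << m):        # reduce modulo the field polynomial
                a ^= poly
        return r

For GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B), gf2m_mul(0x57, 0x83, 0x11B, 8) returns 0xC1, matching the worked example in the AES specification (FIPS-197).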

Algorithms For Efficient Implementation Of Secure Group Communication Systems

Rahul, S 11 1900 (has links)
A distributed application may be considered as a set of nodes which are spread across the network and need to communicate with each other. The design and implementation of such distributed applications is greatly simplified by Group Communication Systems (GCSs), which provide multipoint-to-multipoint communication; GCSs can thus be used as building blocks for implementing distributed applications. The GCS is responsible for reliable delivery of group messages and for management of group membership. The peer-to-peer model and the client-server model are the two models of distributed systems for implementing GCSs; in this thesis, our focus is on improving the capability of GCSs based on the client-server model. Security is an important requirement of many distributed applications, and for such applications security has to be provided in the GCS itself. The security of a GCS includes confidentiality, authentication and non-repudiation of messages, and ensuring that the GCS properly meets its guarantees. The complexity and cost of implementing these three types of security guarantees depend greatly on whether or not the GCS servers are trusted by the group members. Making use of GCS services provided by untrusted servers becomes necessary when the servers are managed by a third party. In this thesis, we propose algorithms for ensuring the above three security guarantees for GCSs in which the servers are not trusted. As part of the solution, we propose a new digital multisignature scheme which allows group members to verify that a message has indeed been signed by all group members. The various group key management algorithms proposed in the literature differ from each other with respect to four metrics: communication overhead, computational overhead, storage at each member, and distribution of load among group members. We identify the need for a distributed group key management algorithm that minimizes the computational overhead on group members and propose an algorithm to achieve it.

One To Many And Many To Many Collective Communication Operations On Grids

Gupta, Rakhi 12 1900 (has links)
Collective communication operations are widely used in MPI applications and play an important role in their performance. Hence, various projects have focused on optimizing collective communication for various kinds of parallel computing environments, including LAN settings, heterogeneous networks and, most recently, Grid systems. The distinguishing factors of Grids compared with all other environments are the heterogeneity of hosts and network, and dynamically changing resource characteristics including load and availability. The first part of the thesis develops a solution for MPI broadcast (one-to-many) on Grids. Some current strategies take into consideration static information about network topology for determining an efficient broadcast tree for Grids; others take into account only transient network characteristics. We combine both strategies and cluster the network dynamically on the basis of link bandwidths. Given a set of network parameters, we use simulated annealing (SA) to obtain the best schedule (see the sketch after this abstract). We can also time-tune individual SAs to adapt the solution-finding process on the basis of the estimated time available before the next broadcast invocation in the application, and we developed a software architecture for updating schedules. We compared our algorithm with earlier approaches under loaded network conditions and obtained an average performance improvement of 20%.
The second part of the thesis extends this work to the MPI allgather (many-to-many) operation. Current popular techniques consider strict hierarchical schemes for this operation, wherein a representative (or coordinator) node is chosen from each cluster and inter-cluster communication is done through these representative nodes. This is non-optimal, since inter-cluster communication is usually on high-capacity links that can sustain more than one transfer at the same throughput. We developed a cluster-based, incremental heuristic algorithm for allgather on Grids. We compared the time taken by the allgather schedules determined by this algorithm with current popular implementations, and also with a strategy where allgather is constructed from a set of broadcast trees. We obtained an average performance improvement of 67% over existing strategies.
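A sketch of the simulated-annealing loop underlying the broadcast-schedule search (the schedule encoding, cost model and perturbation below are assumptions for illustration; the thesis's versions are driven by the dynamically clustered bandwidth data):

    import math, random

    def anneal_schedule(init, cost, neighbor, t0=1.0, cooling=0.95, iters=2000):
        """Generic simulated annealing over candidate broadcast trees.
        cost(s) estimates completion time of schedule s under current link
        bandwidths; neighbor(s) perturbs s, e.g. by re-parenting one node."""
        best = cur = init
        t = t0
        for _ in range(iters):
            cand = neighbor(cur)
            delta = cost(cand) - cost(cur)
            if delta < 0 or random.random() < math.exp(-delta / t):
                cur = cand
                if cost(cur) < cost(best):
                    best = cur
            t *= cooling    # the cooling/iteration budget is what gets
                            # time-tuned to the next broadcast invocation
        return best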

Energy Efficient Scheduling Of Sensing Activity In Wireless Sensor Networks Using Information Coverage

Vashistha, Sumit 01 1900 (has links)
Network lifetime is a key issue in wireless sensor networks, where the sensor nodes, typically distributed in remote/hostile sensing areas, are powered by finite-energy batteries which are not easily replaced or recharged. Depletion of these batteries can result in a change in the network topology or in the end of network life itself; hence, prolonging the life of wireless sensor networks is important. Energy in wireless sensor nodes is consumed by i) sensing functions, ii) processing/computing functions, and iii) communication functions; for example, the energy consumed by the transmit and receive electronics constitutes the energy expended on communication. Our focus in this thesis is on the efficient use of energy for sensing. In particular, we are concerned with energy-efficient algorithms for scheduling the sensing activity of sensor nodes: deciding when to activate a sensor node for sensing (active mode) and when to keep it idle (sleep mode). The novel approach we adopt to achieve efficient scheduling of sensing activity is an information coverage approach, rather than the widely adopted physical coverage approach. In the physical coverage approach, a point is said to be covered by a sensor node if it lies within that sensor's physical coverage range (or sensing radius), the maximum distance between the sensor and the point up to which the sensor can sense with acceptable accuracy. Information coverage, on the other hand, exploits cooperation among multiple sensors to accurately sense a point even if that point falls outside the physical coverage range of all the sensors. In this thesis, we address the question of how to schedule the activity of the various sensor nodes in the network to enhance network lifetime using information coverage.
In the first part of the thesis, we are concerned with scheduling sensor nodes for sensing point targets using information coverage, an example of a point target being the temperature or radiation level at a source or point that needs to be monitored. Defining a set of sensor nodes which can collectively sense a point target accurately as an information cover, we propose an algorithm to obtain Disjoint Sets of Information Covers (DSIC) that can sense multiple point targets in a given sensing area. Being disjoint, the covers obtained by the proposed algorithm allow a simple round-robin schedule of sensor activation (i.e., activate the covers sequentially, as in the sketch after this abstract). We show that the covers obtained using the proposed DSIC algorithm achieve longer network life than the covers obtained using an Exhaustive-Greedy-Equalized Heuristic (EGEH) recently proposed in the literature, and we present a detailed complexity comparison between the DSIC and EGEH algorithms.
In the second part of the thesis, we extend the point-target sensing problem to a full-area sensing problem, where we are concerned with energy-efficient 'area monitoring' using information coverage. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full-area information cover. We first propose a low-complexity heuristic algorithm to obtain full-area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the various sensors that maximizes the sensing lifetime. The optimum schedules obtained using the proposed algorithm are shown to achieve significantly longer sensing lifetimes than those achieved using physical coverage. Relaxing the full-area coverage requirement to a partial-area coverage requirement (e.g., treating 95% area coverage as adequate instead of 100%) further enhances the lifetime. The algorithms proposed for the point-target and full-area sensing problems in the first two parts are essentially centralized; decentralized algorithms are preferred in large networks. Accordingly, in the last part of the thesis, we propose a low-complexity, distributed sensor scheduling algorithm for full-area sensing using information coverage. This distributed algorithm is also shown to result in significantly longer sensing lifetimes compared to those achieved using physical coverage.
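The round-robin activation of disjoint covers is simple enough to state in a few lines (the sensor ids and covers below are illustrative only; the covers themselves would come from a DSIC-style algorithm):

    from itertools import cycle

    def round_robin_activation(covers, num_slots):
        """Activate one information cover per time slot, cyclically.
        covers: list of disjoint sets of sensor ids; in each slot every
        sensor outside the active cover sleeps, saving its battery."""
        rr = cycle(covers)
        return [next(rr) for _ in range(num_slots)]

    # three disjoint covers over nine sensors (illustrative only)
    print(round_robin_activation([{0, 3, 6}, {1, 4, 7}, {2, 5, 8}], 5))

Because the covers are disjoint, each sensor is active in exactly one slot per cycle, so the sensing lifetime grows roughly linearly with the number of covers found.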

Perceptual Criterion Based Rate Control And Fast Mode Search For Spatial Intra Prediction In Video Coding

Nagori, Soyeb 05 1900 (has links)
This thesis dwells on two important problems in the field of video coding: rate control and spatial-domain intra prediction. The former applies generally to most video compression standards, while the latter applies to recent advanced standards such as H.264, VC1 and AVS. Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel rate and buffer size constraints, and it has an important bearing on the picture quality of encoded video. Typically, a quality metric such as peak signal-to-noise ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience; however, neither metric is a true measure of perceived video quality. A few researchers have attempted to derive rate control algorithms combining standard PSNR with ad-hoc perceptual metrics of video quality. The concept of using a perceptual criterion for video coding was introduced in [7] within the context of perceptual adaptive quantization. In that work, quantization noise levels were adjusted such that more noise was allowed where it was less visible (busy and textured areas) while sensitive areas (typically flat and low-detail regions) were finely quantized. Macro-blocks were classified into low-detail, texture and edge areas by a classifier that studied the variance of sub-blocks within a macro-block (MB), and the rate models were trained on sets of pre-classified video. One drawback of the above scheme, as with standard PSNR, is that neither accounts for the perceptual effect of motion. The work in [8] achieved this by assigning higher weights to the regions of the image experiencing the most motion; also, the center of the image and objects in the foreground are perceived as more important than the sides.
However, attempts to use perceptual metrics for video quality have been limited by the accuracy of the chosen metrics. In recent years, new and improved metrics of subjective quality have been invented and their statistical accuracy has been studied in a formal manner. Particularly interesting is the work undertaken by the ITU and the Video Quality Experts Group (VQEG). VQEG conducted two phases of testing: in the first phase, several algorithms were tested, but none was found to be any more accurate than a PSNR-based metric; in the second phase, a few years later, a few new algorithms were evaluated, and it was concluded that four of these achieved results good enough to warrant their standardization as part of ITU-T Recommendation J.144. These experiments are referred to as the FR-TV (Full Reference Television) Phase-II evaluations. ITU-T J.144 does not explicitly identify a single algorithm but provides guidelines on the selection of appropriate techniques to objectively measure subjective video quality; it describes four reference algorithms as well as PSNR. Among the four, the NTIA General Video Quality Model (VQM) [11] is the best performing and has been adopted by the American National Standards Institute (ANSI) as North American standard T1.801.03. NTIA's approach has been to focus on defining parameters that model how humans perceive video quality; these parameters are combined using linear models to produce estimates of video quality that closely approximate subjective test results.
The NTIA General Video Quality Model (VQM) has been proven to have a strong correlation with subjective quality. In the first part of the thesis, we apply metrics motivated by the NTIA-VQM model within a rate control algorithm to maximize perceptual video quality. We derive perceptual weights from key NTIA parameters to influence the QP value that decides the degree of quantization. Our experiments demonstrate that a perceptual-quality-motivated TMN8 rate control in an H.263 encoder yields perceivable quality improvements over a baseline TMN8 rate control that uses a PSNR metric. Our experimental results on a set of 11 sequences show, on average, a 6% reduction in bitrate using the proposed algorithm at the same perceptual quality as standard TMN-8.
The second part of our thesis work deals with spatial-domain intra prediction as used in advanced video coding standards such as H.264. The H.264 Advanced Video Coding standard [36] has been shown to achieve video quality similar to older standards such as MPEG-2 and H.263 at nearly half the bit-rate. Generally, this compression improvement is attributed to several new tools introduced in H.264, including spatial intra prediction, adaptive block sizes for motion compensation, an in-loop de-blocking filter, context-adaptive binary arithmetic coding (CABAC), and multiple reference frames. While the new tools allow better coding efficiency, they also introduce additional computational complexity at both the encoder and decoder ends. We are especially concerned here with the impact of intra prediction on the computational complexity of the encoder. H.264 reference implementations such as JM [29] search through all allowed intra-prediction 'modes' in order to find the optimal mode. While this approach yields the optimal prediction mode, it comes at an extremely heavy computational cost. Hence there is much interest in well-motivated algorithms that reduce the computational complexity of the search for the best prediction mode while retaining the quality advantages of full-search Intra4x4. We propose a novel algorithm that reduces the complexity of full search by exploiting our knowledge of the source statistics. Specifically, we analyze the transform-domain energy distribution of the original 4x4 block in different directions and use the results of our analysis to eliminate unlikely modes and reduce the search space for the optimal intra mode. Experimental results show that the proposed algorithm achieves quality metrics (PSNR) similar to full search at nearly a third of the complexity.
This thesis has four chapters and is organized as follows. In the first chapter we introduce the basics of video encoding, present existing work in the area of perceptual rate control, briefly introduce the TMN-8 rate control algorithm, and finally introduce spatial-domain intra prediction. In the second chapter we explain the challenges in combining NTIA perceptual parameters with the TMN8 rate control algorithm: we examine the perceptual features used by NTIA from a video compression perspective, explain how the perceptual metrics capture typical compression artifacts, present a two-pass perceptual rate control (PRC-II) algorithm, and list experimental results on a set of video sequences showing, on average, a 6% bit-rate reduction from using PRC-II rate control instead of standard TMN-8. Chapter 3 contains the second part of our thesis work, on spatial-domain intra prediction: we start by reviewing existing work on intra prediction and then present the details of our proposed intra prediction algorithm and experimental results. We conclude the thesis in chapter 4 and discuss directions for future work on both of our proposed algorithms.
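A toy version of the direction-based pruning idea (a sketch only: the gradient proxy below stands in for the thesis's transform-domain energy analysis, and the mode numbering follows the H.264 convention 0 = vertical, 1 = horizontal, 2 = DC):

    import numpy as np

    def prune_intra4x4_modes(block, keep=2):
        """Rank a few Intra4x4 modes by the 4x4 block's directional
        energy and return a reduced candidate set for RD search."""
        b = block.astype(float)
        e_v = np.abs(np.diff(b, axis=0)).mean()   # variation down columns
        e_h = np.abs(np.diff(b, axis=1)).mean()   # variation along rows
        # vertical prediction copies the row above down each column, so it
        # suits blocks with little vertical variation (small e_v); likewise
        # horizontal prediction suits small e_h, and DC suits flat blocks.
        scores = {0: e_v, 1: e_h, 2: 0.5 * (e_v + e_h)}
        return sorted(scores, key=scores.get)[:keep]

The encoder would then run its usual rate-distortion search only over the returned candidate modes instead of all nine Intra4x4 modes.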
