
Low power JPEG2000 5/3 discrete wavelet transform algorithm and architecture

Tan, Kay-Chuan Benny January 2004 (has links)
With advances in VLSI digital technology, many high-throughput, high-performance imaging and video applications have emerged and grown in usage. At the core of these applications is image and video compression technology. Image and video compression processes are by nature computationally intensive and power consuming. Such high power consumption shortens the operating time of a portable imaging or video device and can also cause overheating. As such, ways of making image and video compression processes inherently low power are needed. The lifting-based Discrete Wavelet Transform (DWT) is increasingly used for compressing digital image data and is the basis of the JPEG2000 standard (ISO/IEC 15444). Even though the lifting-based DWT has attracted considerable implementation effort, there is no work on the low power realisation of the algorithm. Recent JPEG2000 DWT implementations are pipelined data-path centric designs and do not consider the issue of power. This thesis therefore sets out to realise a low power JPEG2000 5/3 lifting-based DWT hardware architecture and investigates whether optimising at both the algorithmic and architectural levels yields lower power hardware. Besides this, the research also ascertains whether an accumulating Arithmetic Logic Unit (ALU) centric processor architecture consumes less power than a feed-through pipelined data-path centric processor architecture. A number of novel implementation schemes for the realisation of a low power JPEG2000 5/3 lifting-based DWT hardware architecture are proposed and presented in this thesis. These schemes aim to reduce the switched capacitance by reducing the number of computational steps and the amount of data-path/arithmetic hardware through manipulation of the lifting-based 5/3 DWT algorithm, operation scheduling and alteration of the traditional processor architecture. These resulted in a novel SA-ALU centric JPEG2000 5/3 lifting-based DWT hardware architecture that saves about 25% of hardware with respect to the two presented existing 5/3 lifting-based DWT architectures.
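The thesis targets hardware, but the algorithm it manipulates is compact enough to state in software. The following is a minimal sketch of the standard reversible 5/3 lifting steps (predict, then update), assuming numpy arrays, an even-length 1-D signal and a simplified symmetric boundary rule rather than the standard's full extension procedure:

```python
import numpy as np

def dwt53_forward(x):
    # JPEG2000 reversible 5/3 lifting DWT, one level, 1-D.
    # Predict: d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
    # Update:  s[n] = x[2n]   + floor((d[n-1] + d[n] + 2) / 4)
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    even_next = np.append(even[1:], even[-1])   # simplified boundary: x[N] := x[N-2]
    d = odd - (even + even_next) // 2           # predict step (high-pass subband)
    d_prev = np.insert(d[:-1], 0, d[0])         # simplified boundary: d[-1] := d[0]
    s = even + (d_prev + d + 2) // 4            # update step (low-pass subband)
    return s, d

def dwt53_inverse(s, d):
    # Exact integer inverse: undo the update step, then the predict step.
    d_prev = np.insert(d[:-1], 0, d[0])
    even = s - (d_prev + d + 2) // 4
    even_next = np.append(even[1:], even[-1])
    odd = d + (even + even_next) // 2
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is undone exactly by its mirror image, the transform is perfectly reversible in integer arithmetic; the thesis's schemes reorder and merge these steps to cut switched capacitance, which this sketch does not attempt.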

Reconfigurable architectures for beyond 3G wireless communication systems

Zhan, Cheng January 2007 (has links)
Market requirements always influence the semiconductor industry. The coexistence of multiple standards with distinct mobility and data rates means that a flexible convergence of current wireless standards and services is expected from beyond 3G systems. This trend places strong demands on the underlying hardware architectures to achieve unprecedented performance, flexibility, low power consumption and time-to-market. Since forward error correction algorithms account for most of the computational cost of the whole physical layer, this thesis uses two forward error correction cases, the Viterbi decoder and the double binary circular Turbo decoder, to investigate three potential reconfigurable hardware architectures for beyond 3G wireless communication systems. Firstly, a domain-specific reconfigurable Viterbi decoder fabric is introduced, which can support multiple Viterbi decoders with different constraint lengths and code rates. It also provides near-ASIC performance in terms of power consumption and area. In order to further reduce the design and verification cost of this domain-specific reconfigurable design, Chapter 4 presents another reconfigurable architecture which can be automatically generated and programmed by its associated CAD framework. Composed of heterogeneous coarse-grained processing units and a 2-D interconnection mesh, this reconfigurable architecture demonstrates significant power and area savings compared with commercial FPGAs. RICA, the reconfigurable instruction cell array, a dynamically reconfigurable architecture programmed in ANSI C, has been developed as a feasible solution for future wireless and multimedia applications. In Chapter 5, several advanced optimization approaches are proposed to efficiently implement the Viterbi decoder on the RICA architecture. Furthermore, Chapters 6 and 7 demonstrate the implementation of a more complex application, the double binary circular Turbo decoder. In Chapter 6, a system model is built to investigate a suitable decoding algorithm that balances decoding throughput against performance degradation. In addition, an appropriate quantization scheme for the decoder implementation is devised based on a bit-true model. Finally, an optimized double binary circular Turbo decoder providing scalable decoding throughput is demonstrated on the RICA architecture.
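As a point of reference for the first case study, a plain software Viterbi decoder is sketched below for one illustrative configuration (hard decisions, rate 1/2, constraint length 3, generators 7 and 5 octal, all our own choices); the fabric in the thesis supports a range of constraint lengths and code rates in hardware:

```python
# Hard-decision Viterbi decoder for a rate-1/2 convolutional code.
# K and G are illustrative parameters, not the fabric's configuration.
K = 3                       # constraint length
G = [0b111, 0b101]          # generator polynomials (7, 5 octal)
NSTATES = 1 << (K - 1)

def encode(bits):
    state, out = 0, []
    for b in bits:
        sr = (b << (K - 1)) | state          # shift register, newest bit at MSB
        out += [bin(sr & g).count("1") & 1 for g in G]
        state = sr >> 1
    return out

def viterbi_decode(received, nbits):
    INF = 10**9
    pm = [0] + [INF] * (NSTATES - 1)         # path metric per state (start in state 0)
    paths = [[] for _ in range(NSTATES)]     # survivor path per state
    for t in range(nbits):
        r = received[2 * t: 2 * t + 2]
        new_pm = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for state in range(NSTATES):
            if pm[state] >= INF:
                continue
            for b in (0, 1):                 # add-compare-select over both branches
                sr = (b << (K - 1)) | state
                out = [bin(sr & g).count("1") & 1 for g in G]
                metric = pm[state] + sum(o != x for o, x in zip(out, r))
                nxt = sr >> 1
                if metric < new_pm[nxt]:
                    new_pm[nxt], new_paths[nxt] = metric, paths[state] + [b]
        pm, paths = new_pm, new_paths
    return paths[min(range(NSTATES), key=pm.__getitem__)]

bits = [1, 0, 1, 1, 0, 0, 1]
assert viterbi_decode(encode(bits), len(bits)) == bits
```

A hardware fabric implements the same add-compare-select recursion, but with traceback memory rather than per-state Python lists.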

A modal logic for handling behavioural constraints in formal hardware verification

Mendler, M. January 1992 (has links)
The application of formal methods to the design of correct computer hardware depends crucially on the use of abstraction mechanisms to partition the synthesis and verification task into tractable pieces. Unfortunately, however, behavioural abstractions are genuine mathematical abstractions only up to behavioural constraints, i.e. under certain restrictions imposed on the device's environment. Timing constraints on input signals form an important class of such restrictions. Hardware components that behave properly only under such constraints satisfy their abstract specifications only approximately. This is an impediment to the naive approach to formal verification, since the question of how to apply a theorem prover when one only knows approximately what formula to prove has not as yet been dealt with. In this thesis we propose, as a solution, to interpret the notion of 'correctness up to constraint' as a modality of intuitionistic predicate logic, so as to remove constraints from the specification and make them part of its proof. This provides for an 'approximate' verification of abstract specifications and yet does not compromise the rigour of the argument, since a realizability semantics can be used to extract the constraints. Also, the abstract verification is separated from constraint analysis, which in turn may be delayed arbitrarily. In the proposed framework constraint analysis comes down to proof analysis, and a computational semantics on proofs may be used to manipulate and simplify constraints.
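The shape of the proposal can be stated in a line. Writing a circle for the 'up to constraint' modality (a sketch in our own notation, not the thesis's full formal system): instead of proving that an implementation meets its abstract specification outright, one proves the modal statement

```latex
% 'Correctness up to constraint' as a modality (notation ours):
% prove the modal statement
\vdash \mathit{Imp} \rightarrow \bigcirc\, \mathit{Spec}
% and let the realizability semantics of the proof extract a
% concrete constraint C on the environment such that
(C \wedge \mathit{Imp}) \rightarrow \mathit{Spec}
```

so that the constraint lives in the proof, where it can be analysed and simplified, rather than cluttering the specification itself.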

Perceptual techniques in audio quality assessment

Rix, Antony W. January 2003 (has links)
This thesis discusses quality assessment of audio communications systems, in particular telephone networks. A new technique for time-delay estimation, based on a smoothed weighted histogram of frame-by-frame delays, is presented. This has low complexity and is found to be more robust to the non-linear distortions typical of telephone networks. The technique is further extended to identify piecewise-constant delay, enabling models to be used for assessing packet-based transmission such as voice over IP, where delay may change several times during a measurement. It is shown that equalisation improves the accuracy of perceptual models for measurements that may include analogue or acoustic components. Linear transfer function estimation is found to be unreliable in the presence of non-linear distortions. Spectral difference and phaseless cross-spectrum estimation methods for identifying and equalising the linear transfer function are implemented for this application, operating in the filter-bank and short-term Fourier spectrum domains. The thesis provides the first detailed examination of the process of selecting and mapping multiple objective perceptual distortion parameters to estimated subjective quality. The systematic variation of subjective opinion between tests is examined and addressed using a new method of monotonic polynomial regression. The effect of this variation on conventional regression techniques, together with a new joint optimisation process, is also considered.
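The core of the delay technique can be sketched in a few lines (our reconstruction of the general idea only: the frame size, weighting and smoothing choices below are arbitrary, numpy arrays are assumed as input, and the thesis's actual weighting and histogram details are not reproduced):

```python
import numpy as np

def estimate_delay(ref, deg, frame=256, max_lag=128, smooth=9):
    # For each frame, record the lag of the cross-correlation peak and its
    # height; accumulate a weighted histogram over lags; smooth it; return
    # the lag at the histogram maximum. Positive result: deg lags ref.
    hist = np.zeros(2 * max_lag + 1)
    lags = np.arange(-(frame - 1), frame)        # lags of the 'full' correlation
    keep = np.abs(lags) <= max_lag
    for start in range(0, len(ref) - frame + 1, frame):
        r = ref[start:start + frame] - ref[start:start + frame].mean()
        d = deg[start:start + frame] - deg[start:start + frame].mean()
        xc = np.correlate(d, r, mode="full")[keep]
        k = int(np.argmax(np.abs(xc)))
        hist[lags[keep][k] + max_lag] += abs(xc[k])   # weight by peak height
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    return int(np.argmax(hist)) - max_lag
```

Because each frame votes independently, a minority of frames corrupted by non-linear distortion cannot drag the estimate away from the histogram mode, which is the robustness property claimed above.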

Methodology for generation capacity and network reinforcement planning

Vovos, Panagis January 2005 (has links)
This thesis presents a novel methodology for generation expansion planning. The method is based on the Optimal Power Flow (OPF), a common tool for the economic operation of power systems. New generation capacity is simulated with the real power of virtual generators located at the candidate connection points, so the OPF can plan generation expansion with respect to the operating constraints of the existing network. A new method had to be developed for the direct incorporation of protection constraints in generation expansion. The modelling of new capacity with virtual generators gives access to power flow control variables, and binding constraints for generation expansion can be expressed as constrained functions of those variables. Accordingly, expected fault currents were expressed as functions of OPF variables, and protection equipment specifications were converted to constraints on these functions. Thereafter, the allocation of new capacity by the OPF directly respects both system and fault constraints. The iterative approach has been proven less efficient than the latter approach, but still maintains some advantages if the method is to be commercially exploited. Generator voltage control policies can also be converted to OPF constraints, and the functionality of the suggested generation capacity allocation method was expanded so that it can assess the impact of such policies on the amount of new capacity that a network can absorb. The method was expanded further to consider the impact of capacity allocation on transmission losses. With a minor reformulation of the original method, a new tool was designed for the optimal siting of reactive power compensation banks for the improvement of network headroom. Finally, a network planning method is presented based on Lagrange multipliers, sensitivity by-products of the OPF solution method, which connect network constraints with generation expansion. Generation expansion is planned simultaneously with network reinforcement, so the overall optimum is achieved. The main conclusion of this work is that the OPF can be used as a powerful planning as well as operating tool. Its flexible formulation allows the incorporation of emerging constraints in generation and network expansion, such as those imposed by protection.
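In its simplest linearised form, the virtual-generator idea reduces to an optimisation over injections at candidate buses. The toy sketch below (a 3-bus DC-flow example of our own, with made-up PTDF values and a single scalar standing in for a protection limit, not the thesis's full AC-OPF formulation) shows how line and fault constraints together bound the capacity a network can absorb:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-bus example (our own numbers): virtual generators inject new
# capacity g1, g2 at two candidate buses; a DC load-flow PTDF matrix maps
# injections to line flows; a single scalar stands in for a fault-level
# (protection) limit on total new capacity.
PTDF = np.array([[0.6, 0.2],     # MW on line A per MW injected at bus 1, 2
                 [0.4, 0.8]])    # MW on line B
line_limit = np.array([80.0, 100.0])   # thermal ratings, MW
fault_cap = 150.0                      # protection headroom proxy, MW

# linprog minimises, so negate the objective to maximise g1 + g2.
res = linprog(
    c=[-1.0, -1.0],
    A_ub=np.vstack([PTDF, -PTDF, np.ones((1, 2))]),   # |flow| and fault limits
    b_ub=np.concatenate([line_limit, line_limit, [fault_cap]]),
    bounds=[(0, None), (0, None)],
)
g1, g2 = res.x
print(f"new capacity: bus 1 = {g1:.1f} MW, bus 2 = {g2:.1f} MW, "
      f"total = {g1 + g2:.1f} MW")
```

In this toy instance the protection constraint caps total new capacity at 150 MW, which is exactly the kind of interaction that incorporating fault constraints directly into the OPF is meant to expose.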

Low power techniques and architectures for multicarrier wireless receivers

Hasan, Mohd January 2003 (has links)
Power consumption is a critical issue in portable wireless communication. Multicarrier code division multiple access (MC-CDMA) has significant potential to be included as a standard in the next generation of mobile communication. This thesis investigates new low power architectures for an MC-CDMA receiver. The FFT processor is one of the major power consuming blocks in multicarrier systems based on orthogonal frequency division multiplexing (OFDM), such as MC-CDMA and wireless LANs. Three low power schemes are presented for reducing the power consumption of FFT processors, namely order-based processing, coefficient memory reduction and simplified coefficient addressing. The order-based processing scheme is based on a novel concept of selectively using either the normal or the two's complement form for only the real part of the coefficients, to minimise the Hamming distance between successive coefficients fed to the multipliers. This significantly reduces the switching activity at the coefficient input of the multiplier and hence the power consumption. The coefficient memory reduction scheme exploits the relationships among the coefficient values to reduce the coefficient memory size from N/2 locations to ((N/8)+1) locations for an N-point FFT, thereby saving both area and power for long FFTs. The proposed coefficient addressing scheme implements the complete coefficient addressing for all stages of a radix-2 FFT processor using a simple multiplexer instead of a cascade of barrel shifters. A low power single-butterfly radix-2 FFT processor and a radix-4 ordered pipelined FFT processor architecture based on the novel order-based processing scheme are also proposed. The ordered low power radix-4 FFT processor is combined with the combiner to realise a low power MC-CDMA receiver. The power consumption of an MC-CDMA receiver can be further reduced by dynamically altering the complexity of the receiver in real time according to changing channel parameters, such as the delay spread, maximum Doppler frequency, transmission rate and signal-to-noise ratio, instead of using a receiver designed for the worst case scenario. The FFT size in multicarrier systems like MC-CDMA varies from 16 points to 1024 points depending on the channel parameters. This thesis proposes a reconfigurable 256-point FFT processor architecture that can be configured in real time to act as a 64-point or 16-point FFT processor, to prove the concept. The power reduction in moving from a fixed 256-point FFT to a reconfigurable 256-point FFT is significant provided that the FFT size varies over a large range, which is indeed the case for an MC-CDMA receiver. This power reduction is achieved by using an appropriate (shorter) FFT size, disabling the clocks of the higher stages in real time. A reconfigurable pipelined MC-CDMA receiver architecture is also proposed that can be configured in real time to process 256 or 64 sub-carriers on the basis of the channel parameters. The power saving is obtained by disabling the first stage and the last (ordering) stage of the FFT processor, and by disabling the unused equaliser memory in the combiner, when switching from 256 to 64 sub-carriers. An FIR filter is another important block in wireless receivers; a number of novel low power FIR filter cores based on different low power algorithms and their hybrids are also presented.
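The memory reduction rests on the eighth-wave symmetry of the FFT twiddle factors W_N^k = e^(-j2*pi*k/N): every factor a radix-2 FFT needs (k < N/2) can be rebuilt from the (N/8)+1 values with k <= N/8 by negation, conjugation and multiplication by -j. A software sketch of the kind of relationship the scheme exploits (the thesis realises this in hardware, not as a lookup function):

```python
import numpy as np

N = 64
# Store only W_N^k for k = 0 .. N/8: that is (N/8)+1 coefficients.
table = np.exp(-2j * np.pi * np.arange(N // 8 + 1) / N)

def twiddle(k):
    # Reconstruct W_N^k from the reduced table via unit-circle symmetries.
    k %= N
    if k >= N // 2:                      # W^(k+N/2) = -W^k
        return -twiddle(k - N // 2)
    if k > N // 4:                       # W^k = -j * W^(k - N/4)
        return -1j * twiddle(k - N // 4)
    if k > N // 8:                       # W^k = -j * conj(W^(N/4 - k))
        return -1j * np.conj(table[N // 4 - k])
    return table[k]

# Sanity check against direct evaluation for every factor a radix-2
# N-point FFT uses.
assert all(np.isclose(twiddle(k), np.exp(-2j * np.pi * k / N))
           for k in range(N // 2))
```

In hardware the negation and conjugation amount to sign flips and real/imaginary swaps on the datapath, so the four-fold reduction in stored coefficients costs almost no extra logic.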

Precoding and multiuser scheduling in MIMO broadcast channels

Lee, Seung-Hwan January 2007 (has links)
Multiple input multiple output (MIMO) techniques are among the most promising technologies for next-generation wireless systems, achieving improved channel reliability as well as high spectral efficiency. In real MIMO downlink scenarios, the number of users is usually greater than the number of transmit antennas at a base station, and the base station is likely to provide a variety of services to users with different quality-of-service (QoS) requirements. Therefore, a multiuser scheduling algorithm for real MIMO broadcast channel (BC) scenarios has to support a mixture of QoS users simultaneously by exploiting the performance gains of multiple antennas, whilst maximizing the sum-rate capacity by selecting a user set for transmission according to performance criteria. The main topic of this thesis is the design of QoS-guaranteed multiuser schedulers for MIMO systems. Such a scheduler should provide different services to different users whilst satisfying system-level requirements such as fairness among users, minimum data rate and delay constraints, as well as trying to maximize the sum-rate capacity of MIMO channels. To this end, the thesis first investigates the performance of MIMO transceiver techniques in terms of error rates and sum-rate capacity, with practical considerations, in order to select a practically appropriate MIMO precoding technique. Then a QoS-aware sequential multiuser selection algorithm is proposed, which selects a user set sequentially from each QoS group in order to satisfy QoS requirements by trading off the transmit antennas between different QoS groups. Using a temporally-correlated MIMO channel model validated by channel measurements, a statistical channel state information (SCSI)-assisted multiuser scheduling algorithm is also proposed, which can minimize the effect of temporal correlation on the sum-rate capacity. Finally, new metrics are proposed to support fairness among users in terms of throughput or delay whilst maximising the sum-rate capacity. With these proposed algorithms, the objective of this thesis, namely to support a mixture of different QoS users simultaneously with fairness considerations whilst maximising the sum-rate capacity by exploiting the advantages of MIMO techniques with practical implementation in mind, can be achieved.
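To fix ideas about what a sum-rate-maximising scheduler does, here is the common greedy skeleton such algorithms extend (a generic sketch with single-antenna users and an equal-power log-det rate proxy, all our own simplifications; the thesis's QoS-aware scheduler adds sequential selection from per-QoS groups and fairness metrics on top of this kind of loop):

```python
import numpy as np

def sum_rate(H, users, snr):
    # Equal-power log-det proxy: log2 det(I + (snr/|S|) * sum_k h_k h_k^H).
    A = np.eye(H.shape[1], dtype=complex)
    for k in users:
        h = H[k][:, None]
        A += (snr / len(users)) * (h @ h.conj().T)
    return float(np.linalg.slogdet(A)[1] / np.log(2))

def greedy_user_selection(H, snr, n_max):
    # H: K x Nt complex matrix, one row per single-antenna user.
    # Repeatedly add the user that most increases the proxy sum rate;
    # stop when no user helps or n_max users are scheduled.
    selected, best_rate = [], 0.0
    while len(selected) < n_max:
        rate, k = max((sum_rate(H, selected + [k], snr), k)
                      for k in range(H.shape[0]) if k not in selected)
        if rate <= best_rate:
            break
        selected.append(k)
        best_rate = rate
    return selected, best_rate

# Toy usage: 8 single-antenna users, 4 transmit antennas, SNR 10 (linear).
rng = np.random.default_rng(0)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
print(greedy_user_selection(H, snr=10.0, n_max=4))
```

The greedy loop is what makes such schedulers practical: exhaustive search over user subsets is combinatorial, whereas this costs one rate evaluation per remaining user per step.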

Ground target classification for airborne bistatic radar

Mishra, Amit Kumar January 2006 (has links)
Bistatic radar is a superset of the monostatic radar system, and hence a bistatic system may offer certain advantages over a monostatic system in present monostatic radar applications. Bistatic technology, if implemented successfully, can give rise to a wide spectrum of novel and innovative usages which would be impossible with the simpler monostatic system. Automatic target classification and recognition has been an area of active research for monostatic radars, and is also a major usage of an airborne radar system. Hence it is pertinent at the current stage to look at different aspects of automatic target recognition (ATR) using synthetic aperture radar (SAR) images collected by a bistatic radar system. Applying this to the classification of ground targets has been the aim of the present project. Simulating a database of bistatic SAR images of ground targets using a generic electromagnetic (EM) computational tool is the first contribution of the project. Major challenges in this approach consisted of selecting a usable and available EM simulator, modelling a selection of ground targets, finding a simple and effective algorithm to generate SAR images from the output of the EM simulator, developing a simple and efficient image formation algorithm for bistatic SAR image generation, and managing the database so that it can be used efficiently in a classification task. All these challenges have been successfully tackled in the present project. The second contribution is an analysis of different aspects of bistatic SAR ATR. This consists of developing an efficient and fast ATR algorithm; studying the effects of clutter noise, bistatic angle, polarisation and k-space support on bistatic ATR; comparing monostatic and bistatic ATR; and suggesting ways to improve bistatic ATR performance. It is shown that, contrary to popular expectation, bistatic ATR is not significantly worse than monostatic ATR. Given a proper ATR algorithm, bistatic ATR performance can be made as good as, if not better than, monostatic ATR performance. Moreover, the loss of ATR performance in the bistatic domain is due more to loss of image resolution than to any loss of image information. The last contribution of the project is a study of the use of multipolar data in an ATR exercise. A group of algorithms was developed to use multipolar information for better ATR performance. It is shown that using multipolar data significantly improves ATR performance; that for some of the multipolar ATR algorithms the performance is much more stable than for the monopolar counterpart; and a new algorithm is proposed for using multipolar data so that ATR performance becomes independent of the polarisation of the radar antenna in the test phase. It is also shown that bistatic multipolar data holds information about the targets which can easily be exploited, contrary to reservations held by experts in the past.
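As an illustration of the baseline end of the ATR algorithm space, a minimal correlation template classifier over SAR image chips might look as follows (our illustration only; the thesis develops and evaluates its own, more efficient algorithms):

```python
import numpy as np

def classify(chip, templates):
    # Nearest-template ATR baseline: 'templates' maps a class label to a
    # list of training image chips (2-D arrays, same shape as 'chip');
    # the test chip gets the label of the template with the highest
    # normalised cross-correlation score.
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())
    scores = {label: max(ncc(chip, t) for t in chips)
              for label, chips in templates.items()}
    return max(scores, key=scores.get)
```

The multipolar algorithms described above generalise this picture by combining scores across polarisation channels, which is one route to the polarisation-independent test-phase behaviour claimed in the abstract.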

Safe data structure visualisation

Eyre-Todd, Richard A. January 1993 (has links)
A simple three-layer scheme is presented which broadly categorises the types of support that a computing system might provide for program monitoring and debugging, namely hardware, language and external software support. Considered as a whole, the scheme forms a model for an integrated debugging-oriented system architecture. This thesis describes work which spans the upper levels of this architecture. A programming language may support debugging by preventing or detecting the use of objects that have no value. Techniques to help with this task, such as formal verification, static analysis, required initialisation and default initialisation, are considered. Strategies for tracking variable status at run-time are discussed. Novel methods are presented for adding run-time pointer variable checking to a language that does not normally support this facility. Language constructs that allow the selective control of run-time unassigned-variable checking for scalar and composite objects are also described. Debugging at a higher level often involves the extensive examination of a program's data structures. The problem of visualising a particular kind of data structure, the hierarchic graph, is discussed, using the previously described language-level techniques to ensure data validity. The elementary theory of a class of two-level graphs is presented, together with several algorithms that perform a clustering technique which can improve graph layout and aid understanding.
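The run-time status-tracking strategy can be illustrated with a toy interpreter-level store in which every variable starts as 'unassigned' and any read before a write is trapped (a sketch of the general idea only, not the thesis's language-level mechanism, which also covers pointers and composite objects):

```python
class CheckedStore:
    # Toy run-time unassigned-variable tracking: every name begins in the
    # 'unassigned' state, and a read before any write raises an error.
    _UNSET = object()

    def __init__(self, names):
        self._slots = {n: self._UNSET for n in names}

    def write(self, name, value):
        self._slots[name] = value

    def read(self, name):
        value = self._slots[name]
        if value is self._UNSET:
            raise RuntimeError(f"read of unassigned variable {name!r}")
        return value

store = CheckedStore(["p", "q"])
store.write("p", 42)
assert store.read("p") == 42
# store.read("q")  # would raise: 'q' was never assigned
```

The same status flag is what makes safe visualisation possible: a display tool can refuse to dereference a pointer whose status is still 'unassigned' instead of following a garbage address.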

Modelling interference in a CSMA/CA wireless network

Tsertou, Athanasia January 2006 (has links)
Initially, a systematic characterisation is made of all the possible ways in which two communicating pairs of nodes can interfere with each other. Using this as a building block and assuming independence of the stations, an estimate of the network throughput can be derived. This estimate proves to be quite accurate for symmetric networks and manages to follow the performance trends in an arbitrary network. Following this, a more detailed Markovian mathematical model is proposed for the analysis of the hidden node case. This approach does not rely on common assumptions such as renewal theory and node synchronisation, and is highly accurate independently of the system parameters, unlike prior methods. Moreover, the usual decoupling approximation is not adopted; on the contrary, a joint view of the competing stations is taken. The model is first developed under the assumption that the network stations employ a constant contention window for their backoff process. Later in the thesis this assumption is relaxed, and performance curves are derived for the case where the stations employ the binary exponential backoff scheme, as is the case in practice. The Markovian state space is kept relatively small by employing an iterative technique that computes the unknown distributions. The adoption of this technique makes the analysis computationally efficient.
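The numerical core of such a model is the computation of a stationary distribution for a (possibly large) joint Markov chain. A minimal sketch of that core follows (the joint chain over the competing stations' backoff states is the thesis's contribution and is not reproduced here; the 2-state matrix below is a toy stand-in):

```python
import numpy as np

def stationary(P, tol=1e-12, max_iter=100_000):
    # Stationary distribution of a finite Markov chain by power iteration:
    # start from the uniform distribution and apply the transition matrix
    # until the change falls below 'tol'.
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ P              # one step of the chain
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

# Toy 2-state example: P[i, j] = Pr(next state j | current state i).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(stationary(P))              # ~ [0.833, 0.167]
```

Throughput metrics then follow by weighting the per-state transmission outcomes with this distribution; the iterative technique mentioned above serves to keep the state space of the joint chain small enough for exactly this kind of computation.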
