  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cyclostationary blind equalisation in mobile communications

Altuna, Jon January 1998
Blind channel identification and equalisation are the processes by which a channel impulse response can be identified and appropriate equaliser filter coefficients obtained, without knowledge of the transmitted signal. Techniques that exploit cyclostationarity can reveal information about nonminimum-phase systems, which cannot be identified using conventional (stationary) second-order statistics (SOS) because these lack the necessary phase information. Cyclostationary blind equalisation methods exploit the fact that, when the received signal is sampled at a rate higher than the transmitted symbol rate, it becomes cyclostationary. In general, cyclostationary blind equalisers can identify a channel with less data than higher-order statistics (HOS) methods require and, unlike HOS methods, impose no constraint on the probability distribution function of the input signal. Nevertheless, cyclostationary methods suffer from some drawbacks; for example, channels whose zeros are equally spaced around the unit circle are unidentifiable. In this thesis the performance of a cyclostationary blind channel identification algorithm combined with a maximum-likelihood sequence estimation receiver is analysed. The simulations were conducted in the environment of the pan-European mobile communication system GSM, and the performance of the blind technique was compared with conventional channel estimation methods that use training. It is shown that although blind equalisation techniques can converge within a few hundred symbols in a time-invariant channel environment, the degradation with respect to trained methods is still considerable. Yet the fact that a dedicated training sequence is not needed makes blind techniques attractive, because the data used for training purposes can be re-allocated as information data.
In the concluding part of this thesis a new blind channel identification algorithm is presented which combines methods that exploit cyclostationarity implicitly and explicitly. It is shown that the new algorithm exploits the properties of cyclostationary statistics to enhance the performance of the technique that relies solely on fractionally-spaced sampling. The algorithm is robust in the presence of correlated noise and interference from adjacent users.
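The core observation behind these methods, that fractionally-spaced sampling makes the received signal cyclostationary, can be illustrated with a toy simulation. This is only a sketch with an arbitrary two-tap pulse shape, not the thesis's GSM setting: oversampling by a factor of two makes the signal's variance periodic in the sample index, a periodically time-varying second-order statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)   # BPSK symbol sequence

# Fractionally-spaced (2x) sampling: each symbol is shaped by a short
# pulse, so the two samples within a symbol period have different power.
pulse = np.array([1.0, 0.5])                   # illustrative pulse shape
x = np.kron(symbols, pulse)                    # oversampled received signal

# The variance is periodic in the sample index with period 2 (the
# oversampling factor): this periodic time variation of second-order
# statistics is cyclostationarity.
var_even = np.var(x[0::2])
var_odd = np.var(x[1::2])
print(var_even, var_odd)                       # ~1.0 and ~0.25
```

A symbol-rate sampled version (taking only every second sample) would be stationary, which is why the extra structure revealed by oversampling is what these equalisers exploit.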
2

Multi-hop relaying networks in TDD-CDMA systems

Rouse, Thomas S. January 2004
The communications phenomena at the end of the 20th century were the Internet and mobile telephony. Now, entering the new millennium, an effective combination of the two should become a similarly everyday experience. Current limitations include scarce, exorbitantly priced bandwidth and considerable power consumption at higher data rates. Relaying systems use several shorter communications links instead of a conventional point-to-point transmission. This can allow for a lower power requirement and, because of the shorter broadcast range, bandwidth re-use may be exploited more efficiently. Code division multiple access (CDMA) is emerging as one of the most common methods of multi-user access. Combining CDMA with time division duplexing (TDD) provides a system that supports asymmetric communications and relaying cost-effectively. The capacity of CDMA may be reduced by interference from other users, so it is important that relay routing is performed so as to minimise interference at receivers. This thesis analyses relaying within the context of TDD-CDMA systems. Such a system was included in the initial draft of the European 3G specifications as opportunity driven multiple access (ODMA). Results are presented which demonstrate that ODMA allows a more flexible capacity-coverage trade-off than non-relaying systems. An investigation into the interference characteristics of ODMA shows that most interference occurs close to the base station (BS); hence in-cell routing that avoids the BS may increase capacity. As a result, a novel hybrid network topology is presented. ODMA uses path loss as its routing metric. This technique does not avoid interference, and hence ODMA shows no capacity increase with the hybrid network. Consequently, a novel interference-based routing algorithm and admission control are developed.
When at least half the network is engaged in in-cell transmission, the interference-based system allows a higher capacity than a conventional cellular system. In an attempt to reduce transmitted power, a novel congestion-based routing algorithm is introduced. This system is shown to have a lower power requirement than any other analysed system and, when more than 2 hops are allowed, the highest capacity. The allocation of time slots affects system performance through co-channel interference. To minimise this, a novel dynamic channel allocation (DCA) algorithm is developed based on the congestion routing algorithm. By minimising system congestion globally across both time slots and routing, the DCA further increases throughput. Implementing congestion-routed relaying, especially with DCA, in any TDD-CDMA system with in-cell calls can show significant performance improvements over conventional cellular systems.
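The move from path-loss routing to interference- or congestion-based routing can be sketched as a change of link metric in a shortest-path search. The network topology and cost values below are hypothetical, and this is only an illustration of the routing idea, not the algorithm developed in the thesis:

```python
import heapq

# Toy relaying network: link costs are taken as interference contributions
# (hypothetical values) rather than path loss as in ODMA.
links = {
    'mobile': {'relay1': 2.0, 'relay2': 5.0, 'bs': 9.0},
    'relay1': {'relay2': 1.0, 'bs': 4.0},
    'relay2': {'bs': 2.0},
}

def min_cost_route(links, src, dst):
    """Dijkstra over link costs: with interference-based costs, the chosen
    route minimises the total interference injected into the network."""
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in links.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

print(min_cost_route(links, 'mobile', 'bs'))
# → (5.0, ['mobile', 'relay1', 'relay2', 'bs'])
```

In this toy case the multi-hop route through both relays beats the direct link (cost 9.0), mirroring the trade-off the thesis quantifies: more hops, less total interference.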
3

Substructural simple type theories for separation and in-place update

Atkey, Robert January 2006
This thesis studies two substructural simple type theories, extending the "separation" and "number-of-uses" readings of the basic substructural simply-typed lambda-calculus with exchange. The first calculus, lambda_sep, extends the alpha-lambda-calculus of O'Hearn and Pym by directly considering the representation of separation in a type system. We define type contexts with separation relations and introduce new type constructors of separated products and separated functions. We describe the basic metatheory of the calculus, including a sound and complete type-checking algorithm. We then give new categorical structure for interpreting the type judgements, and prove that it coherently, soundly and completely interprets the type theory. To show how the structure models separation we extend Day's construction of closed symmetric monoidal structure on functor categories to our categorical structure, and describe two instances dealing with global and local separation. The second system, lambda_inplc, is a re-presentation of a substructural calculus for in-place update with linear and non-linear values, based on Wadler's linear type system with non-linear types and Hofmann's LFPL. We identify some problems with the metatheory of the calculus, in particular the failure of the substitution rule to hold, owing to the call-by-value interpretation inherent in the type rules. To resolve this issue, we turn to categorical models of call-by-value computation, namely Moggi's computational monads and Power and Robinson's Freyd-categories. We extend both of these to include additional information about the current state of the computation, defining parameterised Freyd-categories and parameterised strong monads. These definitions are equivalent in the closed case. We prove that, with an added commutativity condition, they form a sound class of models for lambda_inplc. To obtain a complete class of models for lambda_inplc we refine the structure to better match the syntax.
We also give a direct syntactic presentation of Parameterised Freyd-categories and prove that it is soundly and completely modelled by the syntax. We give a concrete model based on Day's construction, demonstrating how the categorical structure can be used to model call-by-value computation with in-place update and bounded heaps.
4

Nonlinear noise cancellation

Strauch, Paul E. January 1997
Noise or interference is often assumed to be a random process, and conventional linear filtering, control or prediction techniques are used to cancel or reduce it. However, some noise processes have been shown to be nonlinear and deterministic. These nonlinear deterministic noise processes appear random when analysed with second-order statistics. As nonlinear processes are widespread in nature, it may be beneficial to exploit the coherence of nonlinear deterministic noise with nonlinear filtering techniques. The nonlinear deterministic noise processes used in this thesis are generated from nonlinear difference or differential equations derived from real-world scenarios. Analysis tools from the theory of nonlinear dynamics are used to determine an appropriate sampling rate for the nonlinear deterministic noise processes and their embedding dimensions. Nonlinear models, such as the Volterra series filter and the radial basis function network, are trained to model or predict the nonlinear deterministic noise process in order to reduce the noise in a system. The nonlinear models exploit the structure and determinism and therefore perform better than conventional linear techniques. These nonlinear techniques are applied to cancel broadband nonlinear deterministic noise which corrupts a narrowband signal. An existing filter method is investigated and compared with standard linear techniques. A new filter method is devised to overcome the restrictions of the existing one. This method combines standard signal processing concepts (filterbanks and multirate sampling) with linear and nonlinear modelling techniques. It overcomes the restrictions associated with linear techniques and hence produces better performance. Other schemes for cancelling broadband noise are devised and investigated using quantisers and cascaded radial basis function networks.
Finally, a scheme is devised which enables the detection of a signal of interest buried in heavy chaotic noise. Active noise control is another application where the acoustic noise may be assumed to be a nonlinear deterministic process. One of the problems in active noise control is the inversion of the transfer function of the loudspeaker, which may be nonminimum phase. Linear controllers perform only sub-optimally in modelling the noncausal inverse transfer function. To overcome this problem, in conjunction with the assumption that the acoustic noise is nonlinear and deterministic, a combined linear and nonlinear controller is devised. A mathematical expression for the combined controller is derived which consists of a linear system identification part and a nonlinear prediction part. The traditional filtered-x least mean squares scheme in active noise control does not allow the implementation of a nonlinear controller. Therefore, a control scheme is devised to allow a nonlinear controller in conjunction with an adaptive block least squares algorithm. Simulations demonstrate that the combined linear and nonlinear controller outperforms the conventional linear controller.
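The central premise, that deterministic "noise" which looks random to second-order statistics can nevertheless be predicted by a nonlinear model, can be sketched with a logistic-map series and a small radial basis function network fitted by least squares. The map, centre placement and widths here are illustrative choices, not the processes or training schemes used in the thesis:

```python
import numpy as np

# Deterministic "noise": the logistic map looks random to second-order
# statistics, yet each value is a fixed function of the previous one.
x = np.empty(600)
x[0] = 0.3
for t in range(599):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

X, y = x[:-1], x[1:]                      # one-step prediction task

# RBF network: fixed Gaussian centres, output weights by least squares
# (a common shortcut for RBF training; widths chosen by hand here).
centres = np.linspace(0.0, 1.0, 10)
width = 0.1

def design(v):
    return np.exp(-((v[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

w, *_ = np.linalg.lstsq(design(X[:500]), y[:500], rcond=None)
rbf_err = np.mean((design(X[500:]) @ w - y[500:]) ** 2)

# Best linear one-tap predictor (plus bias) on the same data, for contrast.
A = np.column_stack([X[:500], np.ones(500)])
a, *_ = np.linalg.lstsq(A, y[:500], rcond=None)
lin_err = np.mean((X[500:] * a[0] + a[1] - y[500:]) ** 2)

print(rbf_err < lin_err)   # True: the nonlinear model exploits the determinism
```

The linear predictor is nearly useless here because successive logistic-map values are almost uncorrelated, while the RBF network recovers the underlying quadratic map almost exactly.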
5

Analysis of the impact of impulse noise in digital subscriber line systems

Nedev, Nedko H. January 2003
In recent years, digital subscriber line (DSL) technology has been gaining popularity as a high-speed network access technology capable of delivering multimedia services. A major impairment for DSL is impulse noise in the telephone line. However, evaluating the data errors caused by this noise is not trivial, owing to its complex statistical nature, which until recently had not been well understood, and to the complicated error mitigation and framing techniques used in DSL systems. This thesis presents a novel analysis of the impact of impulse noise and the DSL framing parameters on transmission errors, building on a recently proposed impulse noise model. It focuses on errors at higher protocol layers, such as asynchronous transfer mode (ATM), in the most widely used DSL version, asymmetric DSL (ADSL). The impulse noise is characterised statistically through its amplitudes, durations, inter-arrival times, and frequency spectrum, using the British Telecom / University of Edinburgh / Deutsche Telekom (BT/UE/DT) model. This model is broadband, considers both the time and the frequency domains, and accounts for impulse clustering. It is based on recent measurements in two different telephone networks (the UK and Germany) and is therefore the most complete model available to date, well suited to DSL analysis. A new statistical analysis of impulse noise spectra from the DT measurements shows that impulse spectra can be modelled with three spectral components with similar bandwidth statistical distributions. A novel distribution of the impulse powers is also derived from the impulse amplitude statistics. The performance of a generic ADSL modem is investigated in an impulse noise and crosstalk environment for different bit rates and framing parameters. ATM cell and ADSL frame error rates, and subjective MPEG-2 video quality, are used as performance metrics.
A new modification of a bit-loading algorithm is developed to enable stable convergence of the algorithm with trellis coding and a restricted subtone constellation size. It is shown that while interleaving brings improvement when set at its maximum depth, at intermediate depths it actually worsens the performance of all considered metrics in comparison with no interleaving. No such performance degradation is caused by combining several symbols in a forward error correction (FEC) codeword, but this burst error mitigation technique is viable only at low bit rates. Performance improvement can also be achieved by increasing the strength of the FEC, especially if combined with interleaving. In contrast, trellis coding is ineffective against the long error bursts caused by impulse noise. Alien, as opposed to kindred, crosstalk degrades the error rates, an important issue in an unbundled network environment. It is also argued that, from a user perspective, error-free data units are a better performance measure than the commonly used error-free seconds. The impact of impulse noise on errors in DSL systems has also been considered analytically. A new Bernoulli-Weibull impulse noise model at symbol level is proposed, and it is shown that other models, which assume Gaussian-distributed impulse amplitudes or Rayleigh-distributed impulse powers, give overly optimistic error estimates in DSL systems. A novel bivariate extension of the Weibull impulse amplitudes is introduced to enable the analysis of orthogonal signals. Since no exact closed-form expression exists for the symbol error probability of multi-carrier QAM under the Bernoulli-Weibull noise model, this problem is solved numerically. Multi-carrier QAM is shown to perform better at high signal-to-noise ratio (SNR) but worse at low SNR than single-carrier QAM, in both cases because of the spreading of noise power between subcarriers.
Analytical expressions for errors up to frame level in the specific case of ADSL are then derived from the impulse noise model, with good agreement with simulation results. The Bernoulli-Weibull model is applied to study the errors in single-pair high-speed DSL (SHDSL). The performance of ADSL is found to be better when the burst error mitigation techniques are used, but SHDSL has advantages where low bit error rate and low latency are required.
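A minimal sketch of a Bernoulli-Weibull impulse noise generator of the kind described above, with illustrative parameter values rather than the fitted BT/UE/DT statistics: impulses occur as Bernoulli events per symbol, their amplitudes follow a Weibull distribution, and the result has a far heavier tail than a Gaussian of the same power, which is why Gaussian-amplitude models are over-optimistic.

```python
import numpy as np

rng = np.random.default_rng(2)

def bernoulli_weibull_noise(n, p=0.01, shape=0.5, scale=1.0, sigma=0.1):
    """Per-symbol noise: background Gaussian plus, with probability p, an
    impulse whose amplitude is Weibull-distributed with a random sign.
    (Hypothetical parameter values, not the thesis's measured ones.)"""
    background = rng.normal(0.0, sigma, n)
    hit = rng.random(n) < p                               # Bernoulli arrivals
    amp = scale * rng.weibull(shape, n) * rng.choice([-1.0, 1.0], n)
    return background + hit * amp

noise = bernoulli_weibull_noise(200_000)

# With shape < 1 the impulse tail is much heavier than a Gaussian of the
# same total power (P(|Z| > 5 sigma) would be about 6e-7 for a Gaussian).
print(np.mean(np.abs(noise) > 5 * np.std(noise)))
```

Sweeping `p` and `shape` in such a generator is one way to explore, qualitatively, how error bursts depend on impulse clustering and amplitude statistics.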
6

The Knowing: A Fantasy; An Epistemological Enquiry into Creative Process, Form, and Genre

Manwaring, Kevan January 2018
This creative writing PhD thesis consists of a novel and a critical reflective essay. Both articulate a distinctive approach to the challenges of writing genre fiction in the 21st century that I define as 'Goldendark': one that actively engages with the ethical and political implications of the field via the specific aesthetic choices made about methodology, content, and form. The Knowing: A Fantasy is a novel written in the High Mimetic style. Through the story of Janey McEttrick, a Scottish-Cherokee musician descended from the Reverend Robert Kirk, a 17th-century Episcopalian minister from Aberfoyle and author of the 1691 monograph The Secret Commonwealth of Elves, Fauns and Fairies, it fictionalises the diasporic translocation of song- and tale-cultures between the Scottish Lowlands and the Southern Appalachians, and dramatises the creative process. In the accompanying critical reflective essay, 'An Epistemological Enquiry into Creative Process, Form and Genre', I chart the development of my novel: its initial inspiration, my practice-based research, and its composition and completion, all informed both by my practice as a storyteller/poet and by my archival discoveries. In the section 'Walking Between Worlds' I articulate my methodology and seek to defend experiential research as a multi-modal approach, one that included long-distance walking, illustration, spoken-word performance, ballad-singing and learning an instrument. In 'Framing the Narrative' I discuss matters of form: how I engaged with hyperfictionality and digital technology to destabilise traditional conventions of linear narrative and generic expectation. Finally, in 'Defining Goldendark' I articulate in detail my approach to a new ethical aesthetics of the fantasy genre.
7

Signal processing for airborne bistatic radar

Ong, Kian P. January 2003
The major problem encountered by an airborne bistatic radar is the suppression of bistatic clutter. Unlike the clutter echoes of a sidelooking airborne monostatic radar, bistatic clutter echoes are range dependent. Using training data from nearby range gates will therefore widen the clutter notch of the STAP (space-time adaptive processing) processor, causing returns from targets with low relative velocity to be suppressed or even go undetected. Some means of Doppler compensation must be applied to mitigate the range dependency of the clutter. This thesis investigates the nature of the clutter echoes under different radar configurations. A novel Doppler compensation method is proposed for a JDL (joint domain localized) processor, using Doppler interpolation in the angle-Doppler domain together with power correction. Performing Doppler compensation in the Doppler domain allows several different compensations to be carried out at the same time, applied to separate Doppler bins. When using a JDL processor, a 2-D Fourier transform is required to convert space-time training data into the angle-Doppler domain. Performing Doppler compensation in the space-time domain requires a Fourier transform of the compensated training data for every training range gate, and the whole process is then repeated for every range gate under test. With Doppler interpolation, on the other hand, the Fourier transforms of the training data are required only once for all range gates under test. Before carrying out any Doppler compensation, the peak clutter Doppler frequency difference between the training range gate and the range gate under test needs to be determined. A novel way of calculating this Doppler frequency difference that is robust to errors in pre-known parameters is also proposed. Reducing the computational cost of the STAP processor has always been the aim of reduced-dimension processors such as the JDL processor.
Two methods of further reducing the computational cost of the JDL processor are proposed. A tuned DFT algorithm allows the size of the clutter sample covariance matrix of the JDL processor to be reduced by a factor proportional to the number of array elements, without loss of processor performance. Selecting alternate Doppler bins also reduces computational cost, but with a performance loss outside the clutter-notch region. Different system parameters are also used to evaluate the performance of the Doppler interpolation process and the JDL processor. Both clutter range and Doppler ambiguity exist in radar systems operating in medium pulse repetition frequency mode. When suppressing range-ambiguous clutter echoes, performing Doppler compensation for the clutter echoes arriving from the nearest ambiguous range alone appears to be sufficient. The clutter sample covariance matrix is estimated using training data from the range dimension, the time dimension, or both. The number of range and time training data required for the estimation process is investigated in both the space-time and angle-Doppler domains. Because of errors in the Doppler compensation process, a method using the minimum amount of range training data is proposed. The number of training data required for different clutter sample covariance matrix sizes is also evaluated. For the Doppler interpolation and power correction JDL processor, the number of Doppler bins used can be increased to reduce the amount of training data required, while maintaining certain desirable processor performance characteristics.
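The appeal of working in the angle-Doppler domain can be sketched with a toy space-time snapshot (hypothetical array sizes and normalised frequencies, not the thesis's radar parameters): a clutter Doppler difference between range gates shows up as a displacement along the Doppler axis of the 2-D DFT, so compensation can in principle be applied per Doppler bin rather than by re-transforming shifted time-domain data.

```python
import numpy as np

# Toy space-time snapshot: N array elements x M pulses (hypothetical sizes).
N, M = 8, 16
elem = np.arange(N)[:, None]
pulse = np.arange(M)[None, :]

def clutter_patch(spatial_f, doppler_f):
    """A single clutter patch: a 2-D complex sinusoid across elements/pulses."""
    return np.exp(2j * np.pi * (spatial_f * elem + doppler_f * pulse))

# JDL operates in the angle-Doppler domain: one 2-D DFT per snapshot.
AD = np.fft.fft2(clutter_patch(0.25, 0.25))

# A clutter Doppler difference between range gates moves the patch along
# the Doppler axis of the transform, which is what Doppler interpolation
# corrects bin by bin.
AD_shifted = np.fft.fft2(clutter_patch(0.25, 0.25 + 1.0 / M))

peak = np.unravel_index(np.argmax(np.abs(AD)), AD.shape)
peak_shifted = np.unravel_index(np.argmax(np.abs(AD_shifted)), AD_shifted.shape)
print(peak, peak_shifted)   # (2, 4) and (2, 5): a one-bin Doppler displacement
```

Real clutter spreads over a ridge of angle-Doppler bins rather than a single peak, which is why interpolation and power correction, rather than a plain bin shift, are needed in practice.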
8

A proof planning framework for Isabelle

Dixon, Lucas January 2006
Proof planning is a paradigm for the automation of proof that focuses on encoding intelligence to guide the proof process. The idea is to capture common patterns of reasoning which can be used to derive abstract descriptions of proofs known as proof plans. These can then be executed to provide fully formal proofs. This thesis concerns the development and analysis of a novel approach to proof planning that focuses on an explicit representation of choices during search. We embody our approach as a proof planner for the generic proof assistant Isabelle and use the Isar language, which is human-readable and machine-checkable, to represent proof plans. Within this framework we develop an inductive theorem prover as a case study of our approach to proof planning. Our prover uses the difference reduction heuristic known as rippling to automate the step cases of the inductive proofs. The development of a flexible approach to rippling that supports its various modifications and extensions is the second major focus of this thesis. Here, our inductive theorem prover provides a context in which to evaluate rippling experimentally. This work results in an efficient and powerful inductive theorem prover for Isabelle as well as proposals for further improving the efficiency of rippling. We also draw observations in order to direct further work on proof planning. Overall, we aim to make it easier for mathematical techniques, and those specific to mechanical theorem proving, to be encoded and applied to problems.
9

Generating synthetic pitch contours using prosodic structure

Clark, Robert A. J. January 2003
This thesis addresses the problem of generating a range of natural sounding pitch contours for speech synthesis to convey the specific meanings of different intonation patterns. Where other models can synthesise intonation adequately for short sentences, longer sentences often sound unnatural as phrasing is only really considered at the sentence level. We build models within a framework of prosodic structure derived from the linguistic analysis of a corpus of speech. We show that the use of appropriate prosodic structure allows us to produce better contours for longer sentences and allows us to capture the original style of the corpus. The resulting model is also sufficiently flexible to be adapted to suitable styles for use in other domains. To convey specific meanings we need to be able to generate different accent types. We find that the infrequency of some accent and boundary types makes them hard to model from the corpus alone. We address this issue by developing a model which allows us to isolate the parameters which control specific accent type shapes, so that we can reestimate these parameters based on other data.
10

An attentional theory of continuity editing

Smith, Tim J. January 2006
The intention of most film editing is to create the impression of continuous action (“continuity”) by presenting discontinuous visual information. The techniques used to achieve this, the continuity editing rules, are well established, yet there exists no understanding of their cognitive foundations. This thesis attempts to correct this oversight by proposing that “continuity” is actually what perceptual and developmental psychologists refer to as existence constancy (Michotte, 1955): “the experience that objects persist through space and time despite the fact that their presence in the visual field may be discontinuous” (Butterworth, 1991). The main conclusion of this thesis is that continuity editing ensures existence constancy by creating conditions under which a) the visual disruption created by the cut does not capture attention, b) existence constancy is assumed, and c) expectations associated with existence constancy are accommodated after the cut. Continuity editing rules are shown to identify natural periods of attentional withdrawal that can be used to hide cuts. A reaction time study shows that one such period, a saccadic eye movement, occurs when an object is occluded by the screen edge. This occlusion has the potential to create existence constancy across the cut. After the cut, the object only has to appear when and where it is expected for it to be perceived as continuing to exist. This spatiotemporal information is stored in a visual index (Pylyshyn, 1989). Changes to the object’s features (stored in an object file; Kahneman, Treisman, & Gibbs, 1992), such as those caused by the cut, will go unnoticed. A duration estimation study shows that these spatiotemporal expectations become distorted by the withdrawal of attention. Continuity editing rules show evidence of accommodating these distortions to create perceived continuity from discontinuous visual information. The outcome of this thesis is a scientific understanding of filmic continuity.
This permits filmmakers greater awareness of the perceptual consequences of their editing decisions. It also informs cognitive scientists of the potential of film as an analogue for real-world perception that exposes the assumptions, limitations, and constraints imposed upon our perception of reality.
