41 |
Direction of arrival estimation and beamforming for narrowband and wideband signals
Kassem, Wafaa Ibrahim, January 2005 (has links)
No description available.
|
42 |
Numerical noise reduction techniques in signal processing
O'Sullivan, E. A., January 2005 (has links)
No description available.
|
43 |
Improving distributed speech recognition in packet loss
James, Alastair, January 2006 (has links)
No description available.
|
44 |
Microprocessor based signal processing techniques for system identification and adaptive control of DC-DC converters
Algreer, Maher Mohammed Fawzi Saber, January 2012 (has links)
Many industrial and consumer devices rely on switch mode power converters (SMPCs) to provide a reliable, well regulated DC power supply. A poorly performing power supply can potentially compromise the characteristic behaviour, efficiency, and operating range of the device. To ensure accurate regulation of the SMPC, optimal control of the power converter output is required. However, SMPC uncertainties such as component variations and load changes will affect the performance of the controller. To compensate for these time varying problems, there is increasing interest in employing real-time adaptive control techniques in SMPC applications. It is important to note that many adaptive controllers constantly tune and adjust their parameters based upon on-line system identification. In the area of system identification and adaptive control, Recursive Least Squares (RLS) methods provide promising results in terms of fast convergence rate, small prediction error, accurate parametric estimation, and simple adaptive structure. Despite being popular, RLS methods often have limited application in low cost systems, such as SMPCs, because the computationally heavy calculations demand significant hardware resources and, in turn, may require a high specification microprocessor to implement successfully. For this reason, this thesis presents research into lower complexity adaptive signal processing and filtering techniques for on-line system identification and control of SMPC systems. The thesis presents the novel application of a Dichotomous Coordinate Descent (DCD) algorithm for the system identification of a dc-dc buck converter. Two unique applications of the DCD algorithm are proposed: system identification and self-compensation of a dc-dc SMPC. Firstly, specific attention is given to the parameter estimation of a dc-dc buck SMPC. The proposed identification scheme is computationally efficient and uses an infinite impulse response (IIR) adaptive filter as the plant model. Importantly, the proposed method is able to identify the parameters quickly and accurately, thus offering an efficient hardware solution which is well suited to real-time applications. Secondly, a new alternative adaptive scheme that does not depend entirely on estimating the plant parameters is combined with the DCD algorithm. The proposed technique is based on a simple adaptive filter method and uses a one-tap finite impulse response (FIR) prediction error filter (PEF). Experimental and simulation results clearly show the DCD technique can be optimised to achieve comparable performance to classic RLS algorithms. However, it is computationally superior, thus making it an ideal candidate technique for low cost microprocessor based applications.
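As a rough, illustrative sketch only (not the author's implementation, whose details are in the thesis), the Python fragment below shows the general flavour of dichotomous coordinate descent: the normal equations R w = β of a least-squares identification problem are solved one coefficient at a time with power-of-two step sizes, so that a hardware version needs only additions and bit shifts. The leading-element selection rule, the step parameters and the toy FIR plant are assumptions made for this example.

```python
import numpy as np

def dcd_solve(R, beta, n_updates=64, H=1.0, M_b=15):
    """Illustrative 'leading element' dichotomous coordinate descent (DCD)
    solver for the normal equations R w = beta.  Coefficients change by
    +/- d, where d starts at H and is halved whenever no coordinate of the
    residual is large enough, so hardware needs only adds and bit shifts."""
    N = len(beta)
    w = np.zeros(N)                    # estimated filter weights
    r = beta.astype(float).copy()      # residual r = beta - R @ w
    d = H
    for _ in range(n_updates):
        p = int(np.argmax(np.abs(r)))            # most promising coordinate
        while abs(r[p]) <= (d / 2.0) * R[p, p]:  # shrink step until it is useful
            d /= 2.0
            if d < H / 2.0 ** M_b:               # step exhausted: converged
                return w
        s = 1.0 if r[p] > 0 else -1.0
        w[p] += s * d                            # update a single coefficient
        r -= s * d * R[:, p]                     # keep residual consistent
    return w

# Toy usage: estimate a 4-tap FIR "plant" from input/output samples
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)
y = np.convolve(x, h_true)[:len(x)]
X = np.column_stack([np.roll(x, k) for k in range(4)])
R, beta = X.T @ X, X.T @ y
print(dcd_solve(R, beta))              # approaches h_true
```

In a fixed-point hardware realisation the halvings of the step size reduce to bit shifts, which is the property that makes this family of methods attractive for low cost microprocessors and FPGAs.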
|
45 |
Received signal strength based localisation in wireless sensor networks
Alhasanat, Abdullah Issmail, January 2012 (has links)
Thousands of English as a Second Language students in Western universities strive to meet the daily challenge of preparing written assignments. These texts need to comply with the demands and preferences of their university lecturers with regard to clarity of meaning, the logical flow of ideas and the use of an academic vocabulary. However, a characteristic of ESL students’ written work is a weakness of content and a lack of logical organisation of their ideas (Roberts and Cimasko 2008). In many intensive English language programmes, students are taught to use the process-writing approach, the success of which is related to how it is perceived and introduced to the students (Lefkowitz 2009). Atkinson (2003) emphasised that the process-writing approach perceives writing to be a cognitive process that is highly private or individualistic, where writers use specific cognitive phases, such as pre-writing, drafting, and revising, to generate their text. However, writing has been increasingly recognized as a socially and culturally situated activity connecting people with each other in ways that carry particular social meanings (Hyland 2003). Despite this view of writing as a social act, Lefkowitz (2009) claimed that many English Language Programme Centres (ELPCs) superficially implement process-writing in class by aiding students in revising their essays to achieve grammatical accuracy; however the generation, formation and revision of ideas are considered to be of less importance.

This study investigates the use of an electronic portfolio (TaskStream e-portfolio) in an ESL writing course as a tool to support students as they work through the key phases of the writing process. The aim was to help them adopt a consistent approach to their writing practice (self-consistency), to encourage a positive view of the value and importance of writing (self-belief), to foster a realistic appraisal of their strengths and weaknesses as writers (self-judgement), and to examine the relationship between these characteristics and the students’ overall writing performance. To that end, the study addressed four main questions:

• Does utilising a web-based learning platform encourage a change in ESL learners’ writing self-belief?
• Does utilising a web-based learning platform encourage a change in ESL students’ writing self-efficacy?
• Does utilising a web-based learning platform encourage ESL students to consistently apply a process approach to writing?
• Does utilizing a web-based learning platform lead to a change in ESL students’ overall writing performance?

Using a non-equivalent pre-/post-test quasi-experimental research design, 46 ESL students from the same English Language Centre were recruited. The students were divided into a control group and an experimental group and the study ran during the spring and summer terms of 2010. A mixed methodology was used, consisting of an online questionnaire, writing sampling, online tracking and interviews in order to collect relevant data. The findings from the pre-test showed no significant differences between the participants in the two groups.
The post-intervention results indicated no significant improvement among the control group’s motivational constructs and performance in writing, whereas significant differences were found in the experimental group’s writing performance and in the students’ perceived value with regard to writing, writing self-concept, writing self-efficacy and writing process approach self-consistency, following the implementation of the web-based course. However, no significant differences in ESL students’ anxiety about writing were observed. These findings suggested that e-portfolio software has the potential to promote change in ESL students’ writing self-belief and performance. Limitations of the study are discussed, implications of the findings explored, and recommendations for further research in this field are suggested.
|
46 |
Linear prediction approaches to compensation of missing measurement in Kalman filtering
Khan, Naeem, January 2012 (has links)
The Kalman filter relies heavily on perfect knowledge of the sensor readings used to compute the minimum mean square error estimate of the system state. In reality, however, output data may become unavailable due to factors including sensor faults and failures, the confined memory space of buffer registers, and congestion of communication channels. Investigations into the effectiveness of Kalman filtering in the case of imperfect data have therefore, over the last decade, been an interesting yet challenging research topic. The prevailing methodology employed in state estimation with imperfect data is open loop estimation, wherein the measurement update step is skipped during data loss. This method has several shortcomings, such as a high divergence rate and a failure to regain steady state after the data stream resumes. This thesis proposes a novel approach for the above scenario, found to be efficient for both stationary and non-stationary processes, based on linear prediction schemes. Utilising the concept of linear prediction, the missing data (output signal) is reconstructed through modified linear prediction schemes. This reconstructed signal is then employed in Kalman filtering at the measurement update step. To reduce the computational cost of the large matrix inversions, a modified Levinson-Durbin algorithm is employed. It is shown that the proposed scheme offers promising results in the event of loss of observations and exhibits the general properties of conventional Kalman filters. To demonstrate the effectiveness of the proposed scheme, a rigid body spacecraft case study subject to measurement loss has been considered.
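The reconstruction scheme itself is the thesis's contribution; purely as a hedged illustration of the general idea, the sketch below runs a standard Kalman filter and, whenever a scalar measurement is lost, substitutes a value predicted from the recent measurement history by a short autoregressive (linear prediction) model fitted with least squares, rather than skipping the update. The state-space model, AR order, loss rate and noise levels are all invented for this example, and an ordinary least-squares fit stands in for the modified Levinson-Durbin recursion described above.

```python
import numpy as np

def ar_predict(history, order=4):
    """Predict the next sample from past measurements with an AR(order)
    model fitted by least squares (a simple stand-in for a Levinson-Durbin
    style linear predictor)."""
    y = np.asarray(history, dtype=float)
    if len(y) == 0:
        return 0.0                           # no history yet
    if len(y) <= 2 * order:
        return float(y[-1])                  # too little data: hold last value
    X = np.column_stack([y[order - k: len(y) - k] for k in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return float(a @ y[-1:-order - 1:-1])    # y[n] ~ a1*y[n-1] + ... + ap*y[n-p]

def kalman_with_lp(zs, lost, A, C, Q, R, x0, P0, ar_order=4):
    """Kalman filter that reconstructs lost scalar measurements by linear
    prediction instead of skipping the measurement update."""
    x, P = np.array(x0, dtype=float), np.array(P0, dtype=float)
    history, estimates = [], []
    for z, missing in zip(zs, lost):
        x = A @ x                            # time update (prediction)
        P = A @ P @ A.T + Q
        z_used = ar_predict(history, ar_order) if missing else float(z)
        S = (C @ P @ C.T + R).item()         # innovation variance (scalar)
        K = (P @ C.T).ravel() / S            # Kalman gain
        x = x + K * (z_used - (C @ x).item())  # measurement update
        P = P - np.outer(K, C @ P)
        history.append(z_used)
        estimates.append(x.copy())
    return np.array(estimates)

# Toy example: constant-velocity target tracked through ~20% measurement loss
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.05]])
rng = np.random.default_rng(1)
t = np.arange(200) * dt
zs = 2.0 * t + rng.normal(scale=0.2, size=len(t))
lost = rng.random(len(t)) < 0.2
est = kalman_with_lp(zs, lost, A, C, Q, R, x0=[0.0, 0.0], P0=np.eye(2))
```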
|
47 |
A FPGA based low-cost high speed QRD-RLS array processing
Gao, Qiang, January 2012 (has links)
Over the last 30 years, Digital Signal Processing algorithm implementation has been driven by the continued progress and availability of high speed FPGA/ASIC circuit technology. The classic method of CORDIC (COordinate Rotation DIgital Computer) arithmetic has been widely implemented as part of the computational requirements of the well known QR decomposition Recursive Least Squares (QRD-RLS) algorithm. This thesis presents a novel FPGA implementation of a complex valued QR decomposition which can function in adaptive filter applications and implement adaptive filter weight extraction without using the traditional back-substitution method. In order to operate the Givens rotation on a complex valued system, the Double Angle Complex Rotation (DACR) is adopted to simplify the computational requirement of the classic Complex Givens Rotation (CGR). A new modified ‘processor-like’ architecture for a DACR based QR-RLS is presented. It features a single arithmetic Processing Element (PE) that has been extensively pipelined and efficiently shared for the implementation of the Givens transform associated with the QR algorithm. The Annihilation-Reordering look-ahead Transformation (ART) has been used to pipeline, and hence speed up, the real valued QR-RLS adaptive signal processing array without changing the filter’s convergence behaviour. This thesis extends the ART technique to cope with a complex valued QR-RLS computation array, resulting in a novel Complex valued Annihilation-Reordering look-ahead Transformation (C-ART). The complex Givens rotation implemented in CORDIC arithmetic is suitable for cut-set pipelining, thus increasing the throughput.
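The thesis's contribution lies in the DACR/CORDIC hardware architecture; as a floating-point illustration of the underlying recursion only, the sketch below folds each new complex snapshot into the upper-triangular QRD-RLS factor with standard complex Givens rotations and then extracts the weights by ordinary back-substitution (the very step the thesis avoids). The forgetting factor, initialisation and toy channel are assumptions for this example.

```python
import numpy as np

def qrd_rls_update(R, p, u, d, lam=0.99):
    """One QRD-RLS time update: rotate the new snapshot (u, d) into the
    upper-triangular factor R and rotated reference vector p using complex
    Givens rotations, with forgetting factor lam."""
    R = np.sqrt(lam) * np.asarray(R, dtype=complex)
    p = np.sqrt(lam) * np.asarray(p, dtype=complex)
    u = np.asarray(u, dtype=complex).copy()
    d = complex(d)
    for i in range(len(u)):
        a, b = R[i, i], u[i]
        r = np.hypot(abs(a), abs(b))
        if r == 0.0:
            continue
        # unitary rotation [[conj(a), conj(b)], [-b, a]] / r zeroes u[i]
        Ri, ui = R[i, i:].copy(), u[i:].copy()
        R[i, i:] = (np.conj(a) * Ri + np.conj(b) * ui) / r
        u[i:] = (-b * Ri + a * ui) / r
        pi = p[i]
        p[i] = (np.conj(a) * pi + np.conj(b) * d) / r
        d = (-b * pi + a * d) / r
    return R, p

def weights_by_back_substitution(R, p):
    """Weight extraction by plain back-substitution; the thesis avoids this
    step, and it is used here only to check the sketch."""
    return np.linalg.solve(R, p)

# Toy usage: identify a complex 3-tap channel from noisy snapshots
rng = np.random.default_rng(0)
w_true = np.array([0.8 + 0.2j, -0.4 + 0.5j, 0.1 - 0.3j])
N = len(w_true)
R = 1e-3 * np.eye(N, dtype=complex)        # small regularising initialisation
p = np.zeros(N, dtype=complex)
for _ in range(500):
    u = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    d = u @ w_true + 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
    R, p = qrd_rls_update(R, p, u, d)
print(weights_by_back_substitution(R, p))   # close to w_true
```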
|
48 |
The taut string methodology adapted to correlated and bivariate curves
Tang, Judy, January 2009 (has links)
The taut string and multiresolution criterion methodology has been shown by Davies and Kovac (2001) [14] to be effective in the univariate curve estimation problem with i.i.d. Gaussian noise, finding simple approximations with attractive modality properties. In this thesis we propose to extend the problem to correlated noise and also to noisy bivariate observations.
|
49 |
Linear and nonlinear polynomial based estimators
Naz, Shamsher Ali, January 2009 (has links)
The thesis is concerned with the solution of signal processing estimation problems using polynomial techniques. An effort has been made in this thesis to show that the polynomial approach has potential for linear as well as nonlinear estimation problems, in single-input single-output as well as multi-input multi-output systems. The goal is to derive estimators that have a simple structure, are computationally efficient, and may be implemented in real-time systems more easily than existing techniques.
|
50 |
Nonlinear non-Gaussian algorithms for signal and image processing
Morison, Gordon, January 2007 (has links)
This thesis is initially concerned with solving the Blind Source Separation (BSS) problem. The BSS problem has been found to occur frequently in various scientific and engineering application areas. The basic idea of the BSS problem is to separate a collection of mixed data into its underlying information components. To tackle the BSS problem, two related methodologies have been utilized extensively throughout the literature. The first approach utilizes the statistical technique of Independent Component Analysis (ICA). This method uses a transformation that maximizes the statistical independence of the mixed data components.
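As a hedged illustration of what an ICA-based separation looks like in practice (a generic FastICA-style fixed-point iteration, not the algorithms developed in this thesis), the sketch below whitens a two-channel mixture and then rotates it to maximise non-Gaussianity, recovering the sources up to scaling and permutation. The mixing matrix, source signals and nonlinearity are chosen arbitrarily for this example.

```python
import numpy as np

def fastica(X, n_iter=200, tol=1e-8):
    """Minimal FastICA-style separation (tanh nonlinearity, deflation).
    X : (n_components, n_samples) mixed observations (centred internally)."""
    n, _ = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # whitening: decorrelate the mixtures and scale them to unit variance
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X
    W = np.zeros((n, n))
    for i in range(n):
        w = np.random.default_rng(i).standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ Z
            g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
            w_new = (Z * g).mean(axis=1) - g_prime.mean() * w  # fixed-point step
            w_new -= W[:i].T @ (W[:i] @ w_new)   # stay orthogonal to found rows
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < tol
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z              # sources, up to scaling and permutation

# Toy usage: unmix a square wave and a sinusoid from two linear mixtures
t = np.linspace(0.0, 1.0, 4000)
S = np.vstack([np.sign(np.sin(2 * np.pi * 7 * t)),     # square wave source
               np.sin(2 * np.pi * 23 * t)])             # sinusoidal source
A = np.array([[0.7, 0.3], [0.4, 0.6]])                  # unknown mixing matrix
recovered = fastica(A @ S)
```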
|