81

Adaptive Lattice Filtering for Radar Applications

Gibson, James Carey January 1982 (has links)
<p>This thesis examines the lattice-structure prediction-error filter, and its application to air-traffic-control radar for the detection of targets (such as aircraft) obscured by clutter (unwanted reflections from the ground or weather systems). The digitally implemented lattice-structure filter adapts to and eliminates the clutter spectrum, producing an output only when a target causes a change in the input signal. Conventional MTI filters do not perform this detection as reliably.</p> <p>Adaptation to an input signal results from the recursive calculation of the lattice-structure filter's reflection coefficients. Six algorithms for this calculation were examined and compared using simulated radar data. A number of adaptive methods for continuously implementing these algorithms were also analysed. These included the standard gradient and least-squares methods, and two new methods developed in this thesis, the simple gradient and adaptive gradient methods. The harmonic-mean algorithm and the standard and simple gradient methods were selected as most appropriate for this application.</p> <p>The adaptive learning characteristics (both stationary and non-stationary) of these lattice methods were studied theoretically and experimentally, and quantitative relationships were developed describing their behaviour. The performance of the lattice structure as a radar clutter filter was examined in terms of improvement factor, receiver-operating-characteristic, and sub-clutter visibility. Both simulated and actual radar data were used. The actual radar data included signals from aircraft, bird flocks, ground clutter, and several types of weather clutter. The performance of the lattice-structure filter with these data was found to be both more consistent and consistently better than that of the conventional MTI filter.</p> / Doctor of Philosophy (PhD)
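The harmonic-mean rule the abstract selects for the reflection coefficients is Burg's rule, k = -2·Σ f·b / Σ (f² + b²), applied stage by stage of the lattice. A minimal block-based sketch (function name and test signal are illustrative; the thesis's simple/standard gradient variants would instead update each k recursively per sample):

```python
def burg_reflection(x, order):
    """Estimate lattice reflection coefficients with the harmonic-mean
    (Burg) rule: k_m = -2*sum(f*b) / sum(f^2 + b^2), where f and b are
    the forward and backward prediction errors of the previous stage."""
    f = list(x)          # forward prediction errors
    b = list(x)          # backward prediction errors
    ks = []
    for _ in range(order):
        num = sum(fi * bi for fi, bi in zip(f[1:], b[:-1]))
        den = sum(fi * fi + bi * bi for fi, bi in zip(f[1:], b[:-1]))
        k = -2.0 * num / den
        ks.append(k)
        # propagate the errors through this lattice stage
        f, b = ([fi + k * bi for fi, bi in zip(f[1:], b[:-1])],
                [bi + k * fi for fi, bi in zip(f[1:], b[:-1])])
    return ks
```

For a noiseless geometric input x[n] = rⁿ the first coefficient comes out exactly as -2r/(1+r²), which is a convenient sanity check on the rule.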
82

An Abstract Representation And Analysis of Production Lines With Inter-Stage Storage

Chan, Ming Kin January 1994 (has links)
<p>This thesis presents a novel methodology to represent a production system with inter-stage storages and variable cycle times. The 'abstract representation', which employs no approximation and is not based on probability, involves: (1) completely representing a system with stages and storages by elements only, each of which contains one stage and some storages, (2) measuring each element independently by supplying a predefined probing rate of job flow across an element and (3) formulating the instantaneous relationships between the independent measurements and system parameters.</p> <p>This 'abstract representation' is then illustrated by three different problems. The first problem involves on-line system monitoring which requires locating the origins of production loss in a system in real-time. The 'abstract representation' is used to derive the instantaneous relationships between the causes and effects of system production loss among the elements to address this problem.</p> <p>After the instantaneous relationships between the causes and effects of production loss are derived, a natural extension of this is to apply these relationships to address the problems concerning system performance improvements. Therefore, this thesis will address the second and third problems which are system production control and storage allocation.</p> <p>The objectives of production control as defined in (Ryzin et al. 93) are to control the system so that one job will be produced in one unit time and to minimize the inventory in the (infinite) inter-stage storage. In this thesis, an algorithm will be derived to address the problem of production control. This algorithm is based on the instantaneous relationships between the causes and effects of production loss and will be designed to theoretically accommodate systems of any size with arbitrary cycle time behaviours. The storage capacities of the system are finite and can have any value. 
Therefore, the minimization of inventory being carried in an infinite storage is not part of the objective of the production control algorithm in this thesis. This production control algorithm will calculate the maximum allowable cycle time for each stage for each cycle, so that if these maximum allowable cycle times are met by the stages, the goal of production control can be achieved. Furthermore, the maximum allowable cycle times calculated by the production control algorithm in this thesis will be bounded below so that the maximum cycle time values should fall within an acceptable range.</p> <p>The goal of storage allocation in a system is to find the amount of storage space between the stages which yields the maximum system efficiency for a certain type of cycle time distribution while satisfying a set of pre-defined constraints. In this thesis, a preliminary study will be undertaken for this problem with the constraint being that the total available storage space is fixed. This preliminary study tries to show that the approximated long-term relationships between the causes and effects of production loss can be helpful in providing insights which help solve the problem of storage allocation.</p> / Doctor of Philosophy (PhD)
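The causes of production loss the abstract names, a stage starved by an empty upstream buffer or blocked by a full downstream one, are easy to exercise in a toy simulation. Everything below (the synchronous time model, the per-stage loss probabilities, all names) is an illustrative sketch, not the thesis's abstract representation:

```python
import random

def simulate_line(n_steps, caps, fail_prob, seed=0):
    """Toy synchronous model of a serial production line with finite
    inter-stage buffers.  Each step, stage i takes a job from buffer
    i-1 and pushes it to buffer i; a stage is starved when its input
    buffer is empty and blocked when its output buffer is full.
    fail_prob[i] is the chance stage i loses a cycle (a stand-in for
    variable cycle times)."""
    rng = random.Random(seed)
    m = len(fail_prob)                 # number of stages
    buf = [0] * (m - 1)                # current buffer occupancies
    done = 0                           # jobs out of the last stage
    for _ in range(n_steps):
        # sweep downstream-first so a job cannot hop two stages per step
        for i in reversed(range(m)):
            if rng.random() < fail_prob[i]:
                continue               # stage i lost this cycle
            starved = i > 0 and buf[i - 1] == 0
            blocked = i < m - 1 and buf[i] >= caps[i]
            if starved or blocked:
                continue
            if i > 0:
                buf[i - 1] -= 1        # first stage has unlimited supply
            if i < m - 1:
                buf[i] += 1
            else:
                done += 1
    return done, buf
```

With no cycle losses, a three-stage line settles into producing one job per step after a two-step fill transient, which matches the "one job in one unit time" target the abstract cites.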
83

Low Rate Encoding of Autoregressive Sources

Sethia, Madan L. 03 1900 (has links)
<p>Various approaches, traditional as well as non-traditional, are utilized to encode Gaussian and Laplacian distributed general autoregressive sources at rates of 1 and 2 bits per source letter. The performance of the traditional DPCM encoder is evaluated. At these low rates, DPCM turns out to be rather ineffective from a data compression point of view. Underlying laws governing the performance loss caused by the quantiser non-linearity in the predictor loop are detected experimentally. It is found that tree searching improves the performance substantially and that the gain is a very well behaved function of some well known source statistics. The effect of tree searching on a mismatched source predictor is examined: the results indicate that tree searching is not a substitute for a matched predictor. The performance of an intuition-based smoothing filter in cascade with the DPCM encoder is evaluated when the predictor is matched as well as when mismatched to the source. Such smoothing is not helpful. Finally, a certain random coding scheme is used at rate 1. The performance of this information-theoretically inspired scheme is compared with the tree-searched DPCM. Wherever appropriate, the relevance of the results to low rate waveform encoding of speech is stressed.</p> / Master of Engineering (ME)
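For readers unfamiliar with the structure being evaluated, a minimal 1-bit DPCM loop looks like the following: a first-order predictor sits inside the quantiser loop, only the sign of the prediction error is transmitted, and the decoder mirrors the predictor. The coefficient and step size here are arbitrary illustrative choices, not values from the thesis:

```python
def dpcm_1bit(x, a=0.9, step=0.5):
    """1-bit DPCM encoder: predict x[n] as a * reconstruction[n-1],
    transmit only the sign bit of the prediction error, and rebuild the
    reconstruction with a fixed quantiser step.  Returns the bit stream
    and the decoder-side reconstruction."""
    xhat_prev = 0.0
    bits, recon = [], []
    for s in x:
        pred = a * xhat_prev               # predictor inside the loop
        bit = 1 if s - pred >= 0 else 0    # 1-bit (sign) quantiser
        q = step if bit else -step
        xhat_prev = pred + q               # reconstruction the decoder sees
        bits.append(bit)
        recon.append(xhat_prev)
    return bits, recon
```

The quantiser non-linearity sitting inside the predictor loop, the source of the performance loss the abstract studies, is visible here: the predictor runs on the coarse reconstruction, not on the true signal.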
84

Design and Synthesis of Harmonic Surface Acoustic Wave Delay Lines

Naraine, Patrick M. 09 1900 (has links)
<p>Harmonic delay lines are necessary for building SAW oscillators in the 200 MHz - 1 GHz frequency range using conventional photolithographic techniques. The various methods of generating and filtering higher harmonic modes in Surface Acoustic Wave (SAW) delay lines are presented in this thesis. A new design for SAW transducers, called the "stepped-finger" design, was also developed. A stepped-finger harmonic delay line was built, tested, and compared with the only other presently existing harmonic delay line structure, the 3- and 4-finger delay line. The experimental results obtained indicated that the stepped-finger delay line was the more efficient of the two.</p> / Master of Engineering (ME)
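The motivation in the first sentence can be made concrete with a back-of-envelope calculation. The velocity and minimum line width below are typical assumed values, not figures from the thesis: a standard transducer uses λ/4 fingers and gaps, so the fundamental is pinned by the lithographic limit, and third-harmonic operation recovers the upper end of the band.

```python
v = 3158.0             # assumed SAW velocity on ST-cut quartz, m/s
line = 2.5e-6          # assumed minimum finger width, m (conventional photolithography)
wavelength = 4 * line  # lambda/4 fingers and gaps give a period of one wavelength
f1 = v / wavelength    # fundamental frequency, ~316 MHz for these numbers
f3 = 3 * f1            # third-harmonic operation, ~948 MHz
```

With these assumed numbers the fundamental sits near 316 MHz, while the third harmonic reaches nearly 1 GHz without any finer lithography, which is the gap harmonic delay lines fill.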
85

Radar Clutter Classification

Stehwien, Wolfgang 11 1900 (has links)
<p>The problem of classifying radar clutter as found on air traffic control radar systems is studied, and an algorithm is developed to carry out this classification automatically. The basis for the algorithm is Bayes decision theory and the parametric maximum a posteriori probability (MAP) classifier. This classifier employs a quadratic discriminant function and is optimum for feature vectors that are distributed according to the multivariate normal density. Separable clutter classes are most likely to arise from the analysis of the Doppler spectrum. Specifically, a feature set based on the complex reflection coefficients of the lattice prediction error filter (PEF) is proposed. These coefficients are also used in the maximum entropy method (MEM) of spectral estimation, and this link establishes many of their characteristics. A number of transformations are necessary, however, before they can be used as features.</p> <p>The classifier is thoroughly tested using data recorded from two L-band air traffic control radars at different sites. The collected data base contains extensive bird, rain, and ground clutter, as well as thunderstorm, aircraft, and ground-based moving vehicle echoes. Their Doppler spectra are examined, and the properties of the feature set, computed using these data, are studied in terms of both the marginal and multivariate statistics. Several strategies involving different numbers of features, class assignments, and data set pretesting according to Doppler frequency and signal-to-noise ratio were evaluated before settling on a workable algorithm. Final results are presented in terms of experimental misclassification rates and simulated and classified PPI displays.</p> / Doctor of Philosophy (PhD)
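The parametric MAP rule with a quadratic discriminant that the abstract builds on reduces, per class, to g(x) = -½ ln|C| - ½ (x-m)ᵀC⁻¹(x-m) + ln P, choosing the class with the largest g. A 2-D sketch (class names and statistics are invented for illustration; the thesis's features are the transformed lattice reflection coefficients):

```python
import math

def quad_discriminant(x, mean, cov, prior):
    """Quadratic discriminant for one 2-D Gaussian class:
    g(x) = -0.5*ln|C| - 0.5*(x-m)^T C^-1 (x-m) + ln P."""
    dx = (x[0] - mean[0], x[1] - mean[1])
    a, b, c, d = cov[0][0], cov[0][1], cov[1][0], cov[1][1]
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))   # 2x2 inverse
    maha = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return -0.5 * math.log(det) - 0.5 * maha + math.log(prior)

def classify(x, classes):
    """MAP rule: pick the class whose discriminant is largest.
    `classes` maps a name to a (mean, covariance, prior) triple."""
    return max(classes, key=lambda name: quad_discriminant(x, *classes[name]))
```

With well-separated class means, feature vectors near each mean are assigned to that class, which is the behaviour the misclassification-rate experiments quantify.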
86

Parallel Implementations of the Kalman Filter for Tracking Applications

Lee, Kwang Bok Edward 03 1900 (has links)
<p>The first parallel implementations of the extended covariance Kalman filter (ECKF) and the extended square root covariance filter (ESRCF) for tracking applications are developed in this thesis. The decoupling technique and special properties of the tracking KF are exploited to reduce computational requirements and to increase parallelism.</p> <p>Applying the decoupling technique to the ECKF eliminates the need for a matrix inversion, and results in the time and measurement updates of m decoupled (n/m)-dimensional state estimate error covariances P₀(k) instead of one coupled n-dimensional covariance matrix P(k), where m denotes the tracking dimension and n denotes the number of state elements.</p> <p>Similarly, applying the decoupling technique to the ESRCF separates the time and measurement updates of one coupled P½(k) into those of m decoupled P₀½(k)'s.</p> <p>The updates of m decoupled matrices are found to require less computation than those of one coupled matrix, and they may be performed for each axis in parallel.</p> <p>In the parallel implementation of the time and measurement updates of P(k) in the ECKF, the updates of m decoupled P₀(k)'s are found to require approximately m times fewer processing elements and clock cycles than the update of one coupled P(k). Similarly, the parallel implementation of the updates of m decoupled P₀½(k)'s in the ESRCF requires approximately m times fewer processing elements and clock cycles than that for one coupled P½(k).</p> <p>The transformation of the Kalman gain which accounts for the decoupling of P(k) and P½(k) is found to be easy to implement.</p> <p>The sparse nature of the measurement matrix and the sparse, banded nature of the transition matrix are exploited to simplify matrix multiplications.</p> / Doctor of Philosophy (PhD)
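The decoupling idea, replacing one coupled n-dimensional covariance update with m independent (n/m)-dimensional ones, can be illustrated with the covariance update of a single-axis constant-velocity filter. The model, noise values, and measurement matrix below are illustrative; the thesis's gain transformation and square-root form are not reproduced:

```python
def kf_axis_update(P, q, r, dt=1.0):
    """One time + measurement update of a 2x2 state covariance for a
    single tracking axis (constant-velocity model, position-only
    measurement H = [1, 0]).  Under the decoupling assumption an m-axis
    tracker runs m of these independently, in parallel, instead of one
    coupled 2m x 2m update."""
    # time update: P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1]
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q                      # process noise on velocity
    # measurement update: innovation variance s, gain K = P H^T / s
    s = p00 + r
    k0, k1 = p00 / s, p10 / s
    return ((p00 - k0 * p00, p01 - k0 * p01),
            (p10 - k1 * p00, p11 - k1 * p01))
```

Running this once per axis touches only 2x2 quantities, which is where the roughly m-fold saving in processing elements and clock cycles comes from.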
87

Architectural Features of WCRC - A Data Base Computer

Dumpala, Surya R. 03 1900 (has links)
<p>Several data base machine architectures have been proposed in the past few years. The next generation of these machines must support different data models on the same physical data simultaneously, as envisaged in the ANSI/X3/SPARC report or the coexistence model.</p> <p>This thesis presents the architectural features of one such data base machine, called the Well Connected Relational Computer (WCRC). The overall architecture and the facilities offered to the user as well as the DBA are described. A framework for the conceptual level and a detailed design of the internal level are reported. Algorithms for schema conversion and view translation have been developed. The conceptual level language, WCRL, is extended to accommodate data definition, data manipulation, and storage definition facilities. A high level language, DBAL, is developed for the DBA. Two binary storage structures, Pseudo Canonical Partitions (PCPs) - Options I and II, are reported for storing the data at the internal level. They radically differ from the conventional n-ary relational storage structure. A machine oriented language (WCRML) is developed to directly execute the data base instructions in hardware on the PCP storage structures. The basic hardware organization of the internal level, along with special hardware units for data base functions such as join and sort, is reported. Finally, a performance evaluation of WCRC storage structures is presented. The results indicate that WCRC requires less storage and offers faster query response time when compared to architectures based on n-ary relation storage.</p> / Master of Engineering (ME)
88

Novel Techniques and Architectures for Adaptive Beamforming

Ho, Van Thua 04 1900 (has links)
<p>Recent progress in VLSI technology has created a major impact on digital signal processing, including array signal processing. Proposals have been made for using high throughput processors for digital adaptive beamforming in radar and communications systems applications. In this thesis, novel techniques and architectures for adaptive beamforming are developed and presented. These are typified by the development of adaptive beamforming algorithms for planar arrays and by a self-calibration algorithm for antenna arrays. The emphasis, however, is placed on modern adaptive beamforming techniques in which the adaptation is carried out by means of a triangular systolic array processor performing the QR decomposition.</p> <p>Adaptive beamforming algorithms for a planar array, or two-dimensional (2-D) adaptive beamforming algorithms, typified by the 2-D least-mean-squares (LMS) algorithm and the 2-D Howells-Applebaum algorithm, are derived and presented. The concept of 2-D eigenbeams is given to demonstrate the performance of the 2-D adaptive beamforming techniques. As well, the 2-D adaptive beamforming problem is formulated in terms of the 1-D case, with operation taking place along the rows and columns of a planar array. The adaptive processor is then implemented using a manifold of least-squares triarray processors, which in the limit takes the form of a 3-D systolic array. It is shown that this structure is capable of performing adaptation along the rows and columns of the 2-D array simultaneously.</p> <p>One of the major challenges facing workers in array processing is overcoming the degradation in the output of high performance algorithms due to errors in the calibration of the array. A new self-calibration technique for solving this difficult problem is derived and presented herein.
The algorithm is based on iteration, whereby the calibration coefficients are refined through repeated application of the calibration procedure. Its derivation is based on the eigen-based method and least-squares norm minimization. It is shown that the algorithm is capable of automatically estimating the angles-of-arrival (AOA) of the received signals and calibrating the array with minimum phase and gain errors. Results obtained using both simulation and measurement data are given. In the case of the experimental results, the measurement setup is subjected to multipath scenarios.</p> / Doctor of Philosophy (PhD)
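The least-squares weight computation that a triangular systolic array carries out via QR decomposition can be sketched offline. Modified Gram-Schmidt stands in here for the array's Givens rotations, and real data replaces complex snapshots, purely for illustration:

```python
def mgs_qr(A):
    """Modified Gram-Schmidt QR of a tall real matrix (list of rows).
    Returns Q as a list of orthonormal columns and upper-triangular R."""
    n = len(A[0])
    Q = [list(col) for col in zip(*A)]        # work column-wise
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        R[j][j] = sum(v * v for v in Q[j]) ** 0.5
        Q[j] = [v / R[j][j] for v in Q[j]]
        for k in range(j + 1, n):
            R[j][k] = sum(q * v for q, v in zip(Q[j], Q[k]))
            Q[k] = [v - R[j][k] * q for v, q in zip(Q[k], Q[j])]
    return Q, R

def ls_weights(X, d):
    """Least-squares weights minimizing ||X w - d||: factor X = QR,
    then solve R w = Q^T d by back-substitution -- the triangular solve
    a systolic QR array performs on the fly."""
    Q, R = mgs_qr(X)
    n = len(R)
    z = [sum(q * di for q, di in zip(Q[j], d)) for j in range(n)]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (z[i] - sum(R[i][k] * w[k] for k in range(i + 1, n))) / R[i][i]
    return w
```

Avoiding the explicit normal equations XᵀX w = Xᵀd is the point of the QR route: the triangular factorization is better conditioned and maps naturally onto the systolic structure.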
89

A Network Model of Small Intestinal Electrical Activities

Carbajal, Victor M. 03 1900 (has links)
<p>An electronic circuit based on a modified version of the four branch Hodgkin-Huxley electrical equivalent circuit (Roy, 1972) has been proposed and implemented to simulate the pattern of the electrical activities present in the muscle cells of the mammalian small intestine.</p> <p>The analog's implementation comprises two main circuits to simulate these activities. One of them is concerned with generating subthreshold oscillations, while the other is basically a spike-generator circuit. Additional circuitry is included to interface them. Furthermore, the analog provides a parameter set by means of which its performance may be varied. Such settings may alter the intrinsic frequency, the magnitude of the depolarizing phase of the control potential for the response activity to occur, and also the frequency of the electrical response activity.</p> <p>Four such electronic oscillators, having different intrinsic frequencies, were coupled together in a chain structure with passive elements to simulate "frequency pulling" and "entrainment". The model qualitatively reproduced the observed pattern of electrical activities in the small intestine.</p> / Master of Engineering (ME)
90

Techniques for the Recognition of Silhouettes

Capson, David January 1981 (has links)
<p>A representative set of binary image processing techniques selected from the literature is described. The measurement of shape as the fundamental information contained in silhouettes is examined. Operations on digital binary images are demonstrated including smoothing, connectivity analysis and determination of position and orientation. The effects of digitizing errors at the boundary of a silhouette are discussed and examples of industrial vision systems which use binary images are presented.</p> <p>A binary image processing system has been designed and implemented. The apparatus is based on a General Electric TN2500 digital television camera and an Intel iSBC 86/12A microcomputer. Hardware for the acquisition of binary images from the camera is described followed by the software for calculating areas and centroids. The system is capable of "learning" a set of objects in a "Teach" mode and then making an identification based on their area in the "Run" mode.</p> / Master of Engineering (ME)
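The area and centroid measurements at the heart of the "Teach"/"Run" identification reduce to first moments of the foreground pixels. A pure-Python sketch (the thesis computes these in hardware on the camera stream; this is only the arithmetic):

```python
def area_centroid(img):
    """Area (foreground pixel count) and centroid (x, y) of a binary
    image given as a list of rows, 1 = object, 0 = background."""
    area = sx = sy = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                area += 1
                sx += x          # accumulate first moments
                sy += y
    return area, (sx / area, sy / area)
```

Matching an unknown silhouette then amounts to comparing its area against those stored during the "Teach" pass.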
