101

Advances in Space Mapping Technology Exploiting Implicit Space Mapping and Output Space Mapping

Cheng, Qingsha S.
This thesis contributes to advances in Space Mapping (SM) technology in computer-aided modeling, design and optimization of engineering components and devices. Our developments in modeling and optimization of microwave circuits include the SM framework and SM-based surrogate modeling; implicit SM optimization exploiting preassigned parameters; implicit, frequency and output SM surrogate modeling and design; and an SM design framework and implementation techniques.

We review the state of the art in space mapping and the SM-based surrogate (modeling) concept and its applications. In the review, we recall proposed SM-based optimization approaches, including the original algorithm, the Broyden-based aggressive SM algorithm, various trust-region approaches, neural space mapping and implicit space mapping. Parameter extraction (PE) is developed as an essential SM subproblem, and different approaches to enhancing the uniqueness of PE are reviewed. Novel physical illustrations are presented, including the cheese-cutting problem. A framework of space mapping steps is extracted.

Implicit Space Mapping (ISM) optimization exploits preassigned parameters. We introduce ISM and show how it relates to the now well-established (explicit) space mapping between coarse and fine device models. Through this comparison a general space-mapping concept is proposed. A simple ISM algorithm is implemented; it is illustrated on the contrived "cheese-cutting problem" and applied to EM-based microwave modeling and design. An auxiliary set of parameters (selected preassigned parameters) is extracted to match the coarse model with the fine model. The calibrated coarse model (the surrogate) is then (re)optimized to predict an improved fine model solution. This SM technique is easy to implement, since the mapping itself is embedded in the calibrated coarse model and updated automatically during parameter extraction.

We discuss the enhancement of ISM by output space mapping (OSM), specifically response residual space mapping (RRSM), when the models cannot be aligned. ISM calibrates a suitable coarse (surrogate) model against a fine model (full-wave EM simulation) by relaxing certain coarse-model preassigned parameters. Based on an explanation of residual response misalignment, our new approach further fine-tunes the surrogate by RRSM. We present an RRSM approach, illustrated by a novel, simple "multiple cheese-cutting" problem. The approach is implemented entirely in the Agilent ADS design environment.

A new design framework implementing various SM techniques is presented. We demonstrate the steps, for microwave devices, within the ADS (2003) schematic design framework. The design steps are user-friendly, and the framework runs with Agilent Momentum, HFSS and Sonnet em. Finally, we review various engineering applications and implementations of the SM technique.

Doctor of Philosophy (PhD)
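The two-step ISM loop described above can be sketched in a few lines. In the toy below, a "fine" model has a hidden physical constant, a "coarse" model has a preassigned parameter p that is relaxed during parameter extraction, and the calibrated surrogate is reoptimized against a design target; all models, targets and values are invented for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

omega = np.linspace(0.5, 2.0, 16)            # sample "frequency" points

def fine(x):
    return np.sin(1.1 * omega * x)           # expensive model (hidden p = 1.1)

def coarse(x, p):
    return np.sin(p * omega * x)             # cheap model with preassigned p

target = np.sin(omega)                       # design specification (toy)

x, p = 1.0, 1.0                              # initial design and parameter
for it in range(5):
    # 1) parameter extraction: align the coarse model with the fine model
    #    at the current design by relaxing the preassigned parameter p
    p = minimize_scalar(lambda q: np.sum((coarse(x, q) - fine(x))**2),
                        bounds=(0.5, 2.0), method="bounded").x
    # 2) reoptimize the calibrated surrogate against the specification
    x = minimize_scalar(lambda z: np.sum((coarse(z, p) - target)**2),
                        bounds=(0.1, 3.0), method="bounded").x
    print(it, round(x, 4), round(p, 4))      # x -> 1/1.1, p -> 1.1
```

Note how the mapping never appears explicitly: it lives entirely in the extracted parameter p, which is the property of ISM the abstract highlights.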
102

Optimal Lattice Codes for the Gaussian Channel

Kassem, Walid, January 1981
Lattices are used to construct a class of equal-energy codes for the Gaussian channel, and the resultant error probability tends to zero for large n at all rates below channel capacity. The error probability is explicitly bounded for any given lattice code and then further bounded for a general code using the Minkowski-Hlawka theorem of the geometry of numbers. Similar bounds are also applied to maximum-energy codes, to show that such lattice codes are near-optimal.

Finally, the error bounds are applied to explicit codes defined for all n = 2ᵐ. These codes are shown to have a low error probability Pₑ at rates higher than any previously attained.

Doctor of Philosophy (PhD)
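As a toy illustration of the equal-energy construction (not the codes of the thesis), the sketch below takes the points of the integer lattice Zⁿ lying on a shell of fixed squared norm as a codebook, transmits one codeword over an additive white Gaussian noise channel, and decodes to the nearest codeword. The dimension, shell radius and noise level are invented.

```python
import itertools
import numpy as np

n, r2 = 4, 4   # dimension and squared shell radius (assumed values)
# equal-energy codebook: all Z^4 points with squared norm exactly r2
shell = [np.array(v) for v in itertools.product(range(-2, 3), repeat=n)
         if np.dot(v, v) == r2]
code = np.array(shell)
print(len(code), "codewords")    # 24 points on this shell

rng = np.random.default_rng(0)
sigma = 0.5                      # AWGN standard deviation (assumed)
tx = code[rng.integers(len(code))]
rx = tx + rng.normal(0.0, sigma, n)
# maximum-likelihood decoding = nearest codeword in Euclidean distance
decoded = code[np.argmin(np.sum((code - rx)**2, axis=1))]
print("decoding error:", not np.array_equal(decoded, tx))
```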
103

Eigenvalue Sensitivities Applied to Power System Dynamics

Elrazaz, Zaglol S.
In the search for an adequate and efficient method for power system dynamic stability analysis, this thesis illustrates that eigenvalues, eigenvectors and their sensitivities with respect to system parameters are very important and useful tools.

The eigenvalue-eigenvector sensitivities are generalized by deriving expressions for the Nth-order sensitivities. These expressions are recursive in nature, so the calculation of the high-order terms requires little additional computation, yet leads to considerable improvements in evaluating the actual changes in the eigenvalues and eigenvectors due to large variations in the system parameters.

A comprehensive and efficient eigenvalue tracking approach is presented to track a subset of the system eigenvalues over a wide range of parameter variations.

An interesting result is that the first- and Nth-order sensitivities of any eigenvalue of the aggregated model with respect to a certain parameter of the original system are identical to the corresponding sensitivities of the same eigenvalue of the original system with respect to that parameter, regardless of the choice of the aggregation matrix.

Doctor of Philosophy (PhD)
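The first-order sensitivity underlying such analyses has a standard closed form: for a simple eigenvalue λᵢ of A(p) with right eigenvector vᵢ and left eigenvector wᵢ (normalized so wᵢvᵢ = 1), dλᵢ/dp = wᵢ (dA/dp) vᵢ. A minimal numpy sketch with an invented 2×2 state matrix, checked against a finite difference (the thesis's recursive Nth-order expressions are not reproduced here):

```python
import numpy as np

def eig_sensitivity(A, dA_dp):
    lam, V = np.linalg.eig(A)     # right eigenvectors in the columns of V
    W = np.linalg.inv(V)          # rows of W are left eigenvectors, W V = I
    sens = np.array([W[i, :] @ dA_dp @ V[:, i] for i in range(len(lam))])
    return lam, sens

A0 = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy 2x2 state matrix
dA = np.array([[0.0, 0.0], [-1.0, 0.0]])    # dA/dp for a toy parameter p

lam, sens = eig_sensitivity(A0, dA)
order = np.argsort(lam)

# finite-difference check of the first-order sensitivities
eps = 1e-6
lam_eps = np.linalg.eig(A0 + eps * dA)[0]
print(sens[order])
print((lam_eps[np.argsort(lam_eps)] - lam[order]) / eps)
```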
104

Deriving real-time monitors from system requirements documentation

Peters, Dennis K., January 2000
When designing safety- or mission-critical real-time systems, a specification of the required behaviour of the system should be produced and reviewed by domain experts. Also, after the system has been implemented, it should be thoroughly tested to ensure that it behaves correctly. This, however, can be difficult if the requirements are complex or involve strict time constraints. A monitor is a system that observes the behaviour of a target system and reports whether that behaviour is consistent with the requirements. Such a monitor can be used as an oracle during testing or as a supervisor during operation. This thesis presents a technique and tool for generating software for such a monitor from a system requirements document.

A system requirements documentation technique, based on [102], is presented, in which the required system behaviour is described in terms of the environmental quantities that the system is required to observe and control, modelled as the initial conditions and a sequence of events. The required value of each controlled quantity is specified, possibly using modes (equivalence classes of histories) to simplify the presentation. Deviations from the ideal behaviour are described using either tolerance or accuracy functions.

The monitor will be affected by the limitations of the devices it uses to observe the environmental quantities, resulting in the potential for false negative or false positive reports; the conditions under which these occur are discussed.

The generation of monitor software from the requirements documentation for a realistic system is presented. This monitor is used to test an implementation of the system, and it detects errors in the behaviour that were not detected by previous testing. For this example, the time required for the monitor software to evaluate the behaviour is less than the interval between events.

Doctor of Philosophy (PhD)
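A heavily reduced sketch of the monitor-as-oracle idea follows, with invented stand-ins for the requirement (an ideal controlled value plus a tolerance function) rather than anything from the thesis's documentation technique:

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float        # time at which the quantities were sampled
    monitored: float   # observed environmental (monitored) quantity
    controlled: float  # observed controlled quantity

def required_value(m: float) -> float:
    return 2.0 * m     # hypothetical ideal controlled value, a function of m

def tolerance(m: float) -> float:
    return 0.1         # hypothetical allowed deviation from the ideal

def monitor(history: list[Event]) -> list[Event]:
    """Return the events at which behaviour violates the requirement."""
    return [e for e in history
            if abs(e.controlled - required_value(e.monitored))
               > tolerance(e.monitored)]

trace = [Event(0.0, 1.0, 2.05), Event(0.1, 1.5, 3.5)]
print(monitor(trace))  # second event deviates by 0.5 > 0.1, so it is reported
```

In a real monitor the tolerance function would also absorb the observation-device limitations the abstract mentions, which is where false positive and false negative reports originate.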
105

Neural network sensor fusion: Creation of a virtual sensor for cloud-base height estimation

Pasika, Joseph Christopher Hugh, January 1999
Sensor fusion has become a significant area of signal processing research that draws on a variety of tools. Its goals are many; in this thesis, however, the creation of a virtual sensor is paramount. In particular, neural networks are used to simulate the output of a LIDAR (laser radar) that measures cloud-base height. Eye-safe LIDAR is more accurate than the standard tool used for such measurements, the ceilometer. The aim is to make cloud-base height information available at a network of ground-based meteorological stations without actually installing LIDAR sensors. To accomplish this, fifty-seven inputs, ranging from multispectral satellite information to standard atmospheric measurements such as temperature and humidity, are fused in what can only be termed a very complex, nonlinear environment. The result is an accurate prediction of cloud-base height; thus, a virtual sensor is created.

A total of four different learning algorithms were studied, two global and two local, with the very best state-of-the-art learning algorithms selected in each case. The local methods investigated are the regularized radial basis function network and the support vector machine. The global methods include the standard multilayer perceptron trained by backpropagation with momentum (used as a benchmark) and the multilayer perceptron trained via the Kalman filter algorithm.

While accuracy is the primary concern, computational considerations potentially limit the application of several of the above techniques, so in all cases care was taken to minimize computational cost. For example, in the case of the support vector machine, the problem was partitioned to reduce memory requirements and make optimization over a large data set feasible; in the Kalman algorithm case, node-decoupling was used to dramatically reduce the number of operations required. Overall, the methods produced roughly equivalent mean squared errors, indicating that the descriptive capacity of the data had been reached. However, the support vector machine was the clear winner in terms of computational complexity, and through its ability to determine its own dimensionality it can relate information about the physics of the problem back to the user.

This thesis contributes to the literature on three fronts. First, it demonstrates the concept of creating a virtual sensor via sensor fusion. Second, in the remote-sensing field, where the focus has typically been on pattern classification tasks, it provides an in-depth look at the use of neural networks for tough regression problems. Lastly, it provides a useful tool for the meteorological community by making it possible to add large-scale cloud-field information to predictive models.

Doctor of Philosophy (PhD)
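The virtual-sensor regression can be sketched roughly as below, with synthetic data standing in for the fifty-seven real inputs and scikit-learn's SVR standing in for the thesis's support vector machine implementation:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 57))          # surrogate for the 57 sensor inputs
# surrogate cloud-base "height": an unknown function of a few inputs + noise
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)

# support vector regression with standardized inputs (hyperparameters assumed)
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("held-out MSE:", np.mean((pred - y[400:])**2))
```

The number of support vectors the SVR retains plays the role of the self-determined dimensionality the abstract credits with relating the physics back to the user.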
106

Performance Optimization of Hierarchical Memory Systems

Mekhiel, Nassief Nagi
The gap between processor speeds and memory speeds is increasing, and the performance of supercomputers and the scalability of multiprocessor systems are very dependent on the speed of the memory system.

A cache helps to narrow the processor/memory speed gap, but cannot completely decouple the processor from slow memory.

The optimization of main memory performance and the use of a deep multilevel cache hierarchy are proposed here to bridge the processor/memory latency gap.

A novel design that combines optimized bank interleaving with several main memory (DRAM) timing modes to increase memory performance is presented. Four different protocols based on this design are proposed and investigated.

Enforcing the inclusion property for multi-level caches is proposed. A new design that uses three-level caches is presented, and three different models are given.

A design flow graph that makes the design of a multi-level memory system simpler and more flexible is introduced. Selected traces that match real workloads running on a wide range of computers are used to calculate realistic overall system performance.

Doctor of Philosophy (PhD)
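A toy sketch of inclusion enforcement in a three-level hierarchy follows; the sizes, full associativity, LRU replacement and back-invalidation rule are illustrative assumptions, not the protocols studied in the thesis:

```python
from collections import OrderedDict

class Cache:
    def __init__(self, n_blocks):
        self.n = n_blocks
        self.blocks = OrderedDict()          # block -> None, kept in LRU order

    def access(self, block):
        """Access a block; return (hit, evicted_victim_or_None)."""
        if block in self.blocks:
            self.blocks.move_to_end(block)
            return True, None
        victim = None
        if len(self.blocks) >= self.n:
            victim, _ = self.blocks.popitem(last=False)   # evict LRU block
        self.blocks[block] = None
        return False, victim

    def invalidate(self, block):
        self.blocks.pop(block, None)

levels = [Cache(2), Cache(4), Cache(8)]      # L1, L2, L3 (toy sizes)

def access(block):
    # every level is touched on each access to keep the toy simple
    outcome = "memory"
    for i, cache in enumerate(levels):
        hit, victim = cache.access(block)
        if victim is not None:
            # back-invalidation: a block evicted from a lower level must
            # also leave every level above it, preserving inclusion
            for upper in levels[:i]:
                upper.invalidate(victim)
        if hit and outcome == "memory":
            outcome = f"L{i + 1} hit"
    return outcome

for b in [1, 2, 3, 1, 4, 5, 1]:
    print(b, access(b))
```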
107

Model-based clustering algorithms, performance and application

Liu, Jun, January 2000
The main contributions of this thesis are the development of new clustering algorithms (with cluster validation), both off-line and on-line, the performance analysis of the new algorithms, and their application to intrapulse analysis.

Bayesian inference and minimum encoding inference, including Wallace's minimum message length (MML) and Rissanen's minimum description length (MDL), are reviewed for model selection. It is found that the MML coding length is more accurate than the other two from the viewpoint of quantization. By introducing a penalty weight, all criteria considered here are cast into the framework of a penalized likelihood method. Based on minimum encoding inference, an appropriate measure of coding length is proposed for cluster validation, and the coding lengths under four different Gaussian mixture models are fully derived. This provides a criterion for the development of a new clustering algorithm. Judging from the performance comparison with other algorithms, the new clustering algorithm is better suited to processing high dimensional data, with satisfactory performance on small and medium samples. This clustering algorithm is off-line because it requires all the data to be available at the same time.

The theoretical error performance of the clustering algorithm is evaluated under reasonable assumptions. It is shown how the dimension of the data space, the sample size, the mixing proportion and the inter-cluster distance affect the algorithm's ability to detect the true number of clusters. Furthermore, we examine the impact of the penalty weight under the framework of the penalized likelihood method. It is found that there is a range of the penalty weight within which the best performance of the clustering algorithm is achieved; therefore, with some supervision, the penalty weight can be adjusted to further improve performance.

The application of the clustering algorithm to intrapulse analysis is investigated in detail. We first develop pre-processing techniques, including data compression for received pulses, and formulate the problem of emitter number detection and pulse-emitter association as a multivariate clustering problem. After applying the above (off-line) clustering algorithm, we further develop two on-line clustering algorithms: one based on known thresholds, the other on a model-based detection scheme. Performance on intrapulse data using our pre-processing techniques and clustering algorithms is reported, and the results demonstrate that the new clustering algorithms are very effective for intrapulse analysis, especially the model-based on-line algorithm.

Finally, the DSP implementation for intrapulse analysis is considered. Relevant physical parameters are estimated, such as the likely maximal incoming pulse rate. A suitable system diagram is then proposed and its system requirements are investigated. The on-line clustering algorithm is implemented as a core classification module on a TMS320C44 DSP board.

Doctor of Philosophy (PhD)
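The cluster-validation loop can be sketched with scikit-learn's Gaussian mixtures, using BIC as a stand-in penalized coding length (the thesis derives MML/MDL coding lengths instead); the data here are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# three synthetic 3-D clusters of unequal size
X = np.vstack([rng.normal(0, 1, (100, 3)),
               rng.normal(5, 1, (120, 3)),
               rng.normal(-4, 1, (80, 3))])

# fit mixtures with k = 1..6 components and pick the k that minimizes
# the (BIC) coding length, i.e. the penalized likelihood
scores = {k: GaussianMixture(k, random_state=0).fit(X).bic(X)
          for k in range(1, 7)}
best_k = min(scores, key=scores.get)
print(scores)
print("detected number of clusters:", best_k)   # 3 for this data
```

Swapping BIC for a different coding length, or scaling the penalty term, corresponds to the penalty-weight adjustment the abstract describes.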
108

Automated Detection of Treatment Responses with Cardiac Positron Tomography

deKemp, Anthony Robert, January 1995
The physiology of the muscle of the heart (myocardium) can be studied regionally in the living body using positron tomographic techniques. Compounds are labelled with a positron emitting isotope, administered into the circulation, and their distribution in the body is measured with a positron tomograph. This enables the rates of myocardial blood flow and substrate metabolism to be measured regionally in the myocardium of the left ventricle. The accuracy and precision of these measurements are estimated to range from 2% to 15% in a given individual. Much of this variability is due to normal changes in physiological state, or to measurement noise associated with counting statistics. However, additional variability is associated with manual processing, which typically involves specification of the left ventricular myocardium of interest using an interactive visual display. An automated technique of analysis can remove this source of variance and enable an unbiased evaluation of changes in response to the treatment of heart disease. Results can then be compared objectively between population samples or in single subjects studied under different treatment conditions. By performing appropriate statistical comparisons, a large volume of data is compressed into a form which can be interpreted in an efficient manner.

An automated analysis technique is developed to remove the variability associated with manual processing. Reducing the measurement variability increases the statistical power to detect physiological changes in the myocardium in different states of health and disease. The position and angular orientation of the left ventricle are determined directly from the measured dataset, so that the regional measurements can be analysed and presented in a standard format. The time course of radioactivity is determined for several hundred volume elements within the left ventricular myocardium, and for a blood region positioned in the ventricular cavity. Depending on the labelled compound and study protocol, various measures of myocardial physiology are computed from these two basic measurements. A technique is developed to compare treatment changes in physiology between population samples, by segmenting equivalent functional tissue regions from the left ventricular myocardium of the sample subjects. A method is also developed to evaluate the statistical significance of changes in single subjects, which may be useful for determining individual clinical responses to treatments. Comprehensive quality assurance outputs are generated and used to verify the analysis of all cardiac studies.

The performance of the method was determined using cardiac phantom and human data. The position and angular orientation of the left ventricle were determined to within 2±2 mm and 3±3 degrees of the true values, respectively. The detection of treatment changes in a population sample was demonstrated using measurements of myocardial blood flow with ¹³N-labelled ammonia. An increase in blood flow to diseased (ischemic) myocardium in response to nitroglycerin was demonstrated using a quantitative automated approach: a highly significant (p=0.01) difference of 16 ml/min/dg (20% of normal) was detected in a sample group of 14 subjects. The performance of the single-subject analysis was verified by measuring the rate at which significant changes occurred by chance, which was equal to the theoretical false positive rate. In an individual from the population sample, a significant increase (p=0.001) in myocardial blood flow of approximately 30% was detected in a region of known ischemia, in response to treatment with nitroglycerin. The statistical tools developed in this work can also be used to determine other sources of measurement variability; reducing these effects will further increase the power of positron tomography to evaluate objectively the clinical effects and mechanisms of action of cardiac treatments in health and disease.

Doctor of Philosophy (PhD)
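Per tissue segment, the population-sample comparison reduces to a paired test of regional flow before and after treatment. A minimal sketch with synthetic values (not the thesis data, and not its full segmentation pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# synthetic regional blood-flow values for 14 subjects, before and after
# treatment (units and magnitudes are invented)
baseline = rng.normal(60.0, 8.0, 14)
treated = baseline + rng.normal(12.0, 6.0, 14)   # response to treatment

t, p = stats.ttest_rel(treated, baseline)        # paired t-test
print(f"mean change = {np.mean(treated - baseline):.1f}, p = {p:.4f}")
```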
109

Tracking of Multiple Moving Targets by Passive Arrays and Asymptotic Performance Analysis

Zhou, Yifeng
This thesis is focused on tracking multiple moving targets by combining the spatial and temporal information obtained by a passive array.

For stationary sources, a unified constrained subspace fitting approach for estimating the DOA's of spatially close source signals is presented. The algorithm is based on the Karhunen-Loève expansion of the covariance matrix of the array manifold in a sector of interest, and it searches for an optimal signal subspace over the array manifold space that has minimum principal angles with the data signal subspace generated from the array data. This method is shown to be asymptotically consistent. Although the algorithm involves only one-dimensional searches, its performance is comparable to that of methods using multi-dimensional optimization.

We propose a maximum likelihood approach for tracking moving targets with passive arrays. A locally linear model is used for the target motion dynamics, and the target state is shown to be strongly observable. An MTS (multiple target state) vector is defined to describe the target state. The maximum likelihood estimator operates on a batch of array data: the initial MTS is estimated as the maximizing point of the likelihood function of the batch, and subsequent MTS vectors are predicted by the target dynamics. Since the association problem is embedded in the estimation problem, the natural ordering of the target state is kept as long as the initial target DOA's can be successfully resolved by the array. To cope with the difficulties of the nonlinear optimization process, a modified Gauss-Newton algorithm is proposed in which the Hessian is approximated by a positive semi-definite matrix to guarantee that the algorithm is a descent method. The asymptotic performance is fully investigated: we show that the ML estimates of the MTS variables are asymptotically consistent, and we derive explicit formulas for the asymptotic covariance and the Cramér-Rao bounds for the MTS estimates, finding that the ML estimator is relatively efficient. To show the effectiveness of the ML tracking technique, we compare its asymptotic performance with that of the extended Kalman filter (EKF); its performance is superior to that of the EKF. Numerical results from computer simulations demonstrate the performance of the ML tracking technique.

Doctor of Philosophy (PhD)
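The modified Gauss-Newton idea can be sketched on a generic nonlinear least-squares problem: the Hessian is replaced by the positive semi-definite approximation JᵀJ (plus a small ridge), which guarantees a descent direction. The residual model below is invented for illustration, not the array likelihood itself:

```python
import numpy as np

def residuals(theta, t, y):
    return y - np.sin(theta[0] * t + theta[1])

def jacobian(theta, t):
    # dr/dtheta for r = y - sin(theta0 * t + theta1)
    c = np.cos(theta[0] * t + theta[1])
    return np.column_stack([-t * c, -c])

# synthetic observations of a sinusoid with unknown frequency and phase
t = np.linspace(0, 4, 50)
y = np.sin(1.3 * t + 0.4) + 0.05 * np.random.default_rng(4).normal(size=50)

theta = np.array([1.2, 0.3])                 # initial guess
for _ in range(20):
    r, J = residuals(theta, t, y), jacobian(theta, t)
    H = J.T @ J + 1e-8 * np.eye(2)           # PSD Hessian approximation
    theta = theta - np.linalg.solve(H, J.T @ r)   # Gauss-Newton step
print(theta)                                 # close to the true (1.3, 0.4)
```

Because H is positive semi-definite (and positive definite with the ridge), -H⁻¹Jᵀr always points downhill on the squared-residual cost, which is the descent property the abstract emphasizes.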
110

Nonlinear Adaptive Prediction of Nonstationary Signals and its Application to Speech Communications

Li, Liang, January 1994
Prediction of a signal is synonymous with modeling the underlying physical mechanism responsible for its generation. Many physical signals encountered in practice exhibit two distinct characteristics: nonlinearity and nonstationarity. Consider, for example, the important case of speech signals. The production of a speech signal is known to be the result of a dynamic process that is both nonlinear and nonstationary. To deal with the nonstationary nature of signals, the customary practice is to invoke adaptive filtering. Unfortunately, the nonlinear nature of the signal generation process has not received the attention it deserves, in that much of the literature on the prediction of speech signals has focused almost exclusively on linear adaptive filtering schemes.

This thesis studies the nonlinear adaptive prediction of nonstationary signals using neural networks and its application to real-time speech communication. Three basic questions are answered: 1) What kinds of neural networks are suited to real-time adaptive signal processing? 2) How can an adaptive neural network predictor be designed? 3) Can a neural network predictor be used for real-time communication?

In this thesis, a new Pipelined Recurrent Neural Network (PRNN) is designed. The PRNN is composed of M separate, identical modules, each designed as a recurrent neural network with a single output neuron. Information flow into and out of the modules proceeds in a synchronized fashion.

A new scheme for the nonlinear adaptive prediction of nonstationary signals is proposed. The neural network-based filter, which consists of a PRNN and a linear filter, learns to adapt to statistical variations of the incoming time series while the prediction is going on. The dynamic behavior of the pipelined recurrent neural network-based predictor is demonstrated on several speech signals; for these applications it is shown that the nonlinear adaptive predictor outperforms the traditional linear adaptive scheme in a significant way. It should, however, be emphasized that the nonlinear adaptive predictor has a much wider range of applications, such as the adaptive prediction of sea clutter.

The PRNN-based adaptive predictor is applied to adaptive differential pulse code modulation (ADPCM). In the encoder and decoder of an ADPCM system, the predictor is successfully combined with an adaptive quantizer for low bit-rate speech communication. The work involves a novel combination of the pipelined recurrent neural network and a robust linear adaptive filter, and the design of a new 4-, 8- or 16-level adaptive quantizer. The nonlinear ADPCM algorithm is tested with different speech signals and compared with a linear ADPCM algorithm, recommended by CCITT, in the time domain, in the frequency domain, and through listening tests. Speech experiments show that nonlinear ADPCM provides a promising new approach for high-quality communication at low bit rates.

Doctor of Philosophy (PhD)
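A heavily simplified, single-module sketch of nonlinear adaptive prediction follows (the thesis pipelines M such recurrent modules and cascades a linear adaptive filter); the signal, tap count and the no-BPTT gradient rule are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 2000, 4                     # samples and input taps (assumed)
# toy stand-in for a speech signal: a noisy sinusoid
x = np.sin(0.05 * np.arange(n)) + 0.05 * rng.normal(size=n)

w = np.zeros(p + 1)                # tap weights plus one feedback weight
s, mu = 0.0, 0.05                  # neuron output state and learning rate
err = []
for k in range(p, n):
    # inputs: the last p samples plus the fed-back previous output
    u = np.concatenate([x[k - p:k][::-1], [s]])
    s = np.tanh(w @ u)                            # nonlinear one-step prediction
    e = x[k] - s                                  # prediction error
    w += mu * e * (1 - s**2) * u                  # gradient step (no BPTT)
    err.append(e**2)
print("MSE over last 200 samples:", np.mean(err[-200:]))
```

In the full PRNN, M copies of this module run on staggered windows of the signal, each feeding its output to the next, so the expensive recurrent training is pipelined across modules.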
