531 |
Improved load balancing strategies for distributed computing systems
Selvam, S, 05 1900
Distributed computing systems
|
532 |
Calibration transfer methods for feedforward neural network based instruments
Cibere, Joseph John, 01 January 2001
The calibration transfer problem examined by this thesis is that of exploiting knowledge of an initial instrument calibration model to obtain a second, similar model of acceptable accuracy with less data than was used to obtain the initial model. This thesis considers instruments whose calibration models are based on a feedforward neural network (FFNN). The recalibration of these instruments has raised concerns regarding the significant quantity of data required. Calibration transfer methods provide an alternative that can reduce the data needed for recalibration while maintaining acceptable calibration accuracy. No methods of calibration transfer have yet been reported for FFNN based instruments. This thesis develops several calibration transfer methods that allow recalibration using less data than is needed with conventional backpropagation learning. First, a simple non-learning method is introduced. Next, a new method is developed based on a supervised learning algorithm employing a measure that learns the nth-order partial derivatives of the desired calibration model provided by the calibration data. Finally, a simple, previously unreported method is described for initialising the weights of an FFNN so that learning begins from a point on the error surface that approximates a previously obtained calibration model. Using computer simulations, the calibration error associated with these calibration transfer methods is compared to the error obtained from a recalibration using conventional backpropagation learning. The simulations varied the number of neurons, the number of calibration points, and the similarity between calibration models. The desired calibration models were selected from 8th-order polynomials and bandlimited normal random processes.
The simulations indicated that no single method of calibration transfer provides the lowest calibration error, but it is possible to achieve a 2- to 1000-fold decrease in the median calibration error relative to that of the standard recalibration while using half the calibration data. The results also revealed that it is difficult to predict whether a specific set of calibration conditions will achieve a reduction in calibration error.
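The third transfer method, warm-starting a new network from an earlier calibration, can be sketched as follows; the layer shapes and perturbation scale are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

def init_from_previous(prev_weights, scale=0.01, rng=None):
    """Initialise a new FFNN from a copy of a previously calibrated
    network's weights (a warm start), rather than at random, so learning
    begins from a point on the error surface near the old model."""
    rng = rng or np.random.default_rng(0)
    # a small perturbation (scale is an illustrative choice) avoids
    # starting exactly at the old model while staying close to it
    return [w + scale * rng.standard_normal(w.shape) for w in prev_weights]

# hypothetical previous calibration: 1 input, 5 hidden neurons, 1 output
prev = [np.ones((1, 5)), np.ones((5, 1))]
warm = init_from_previous(prev)
```

Training then proceeds with ordinary backpropagation from this starting point, typically needing fewer calibration points to reach acceptable accuracy when the two models are similar.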
|
533 |
Generating system reliability optimization
Chen, Hua, 01 January 2000
Reliability optimization can be applied in both conventional and non-conventional generating system planning. This thesis is concerned with generation adequacy optimization, with emphasis on applications to wind energy penetration planning and interruptible load utilization. New models, indices and techniques for generation adequacy optimization including wind turbines and interruptible load utilization were developed in this research. A sequential Monte Carlo simulation technique for wind power modeling and reliability assessment of a generating system was developed in the research associated with optimum wind energy penetration planning. An auto-regressive moving average (ARMA) time series model is used to simulate the hourly wind speeds. Two new risk-based capacity benefit indicators, designated the Load Carrying Capability Benefit Ratio (LCCBR) and the Equivalent Capacity Ratio (ECR), are introduced. These two indices indicate the capacity benefit and credit associated with a wind energy conversion system, and a bisection technique was developed to assess them. The problem of determining the optimum site-matching wind turbine parameters was studied with the LCCBR and ECR as the optimization objective functions. Sensitivity studies were conducted to show the effect of wind energy penetration level on generation capacity benefit. A procedure for optimum penetration planning was developed which extends the methods used for conventional generation adequacy optimization. A basic framework and techniques for conducting interruptible load analysis using sequential Monte Carlo simulation were created in the research associated with interruptible load utilization. A new index designated the Avoidable Additional Generating Capacity (AAGC) is introduced. Bisection search techniques were developed to effectively determine the Incremental Load Carrying Capability (ILCC) and the AAGC.
Case studies on suitable contractual options for interruptible load customers under given conditions are also presented in this thesis. The results show that selecting a suitable set of interruptible load contractual conditions, in which the various risk conditions are well matched, can achieve enhanced interruptible load carrying capability or capacity benefits. The series of case studies described in this thesis indicates that the proposed concepts, framework, models and quantitative techniques can be applied in practical engineering situations to provide a scientific basis for generating system planning.
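The hourly wind-speed simulation driving the sequential Monte Carlo study can be sketched with a small ARMA generator; the AR/MA coefficients, mean, and standard deviation below are placeholder values, not those fitted in the thesis:

```python
import numpy as np

def simulate_arma_wind(n_hours, phi, theta, mu=6.0, sigma=2.0, seed=1):
    """Simulate hourly wind speeds as mean + sigma * ARMA(p, q) noise.
    phi: AR coefficients, theta: MA coefficients (illustrative values)."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    warmup = max(p, q)
    y = np.zeros(n_hours + warmup)
    e = rng.standard_normal(n_hours + warmup)
    for t in range(warmup, len(y)):
        y[t] = sum(phi[i] * y[t - 1 - i] for i in range(p)) \
             + e[t] + sum(theta[j] * e[t - 1 - j] for j in range(q))
    speeds = mu + sigma * y[warmup:]
    return np.clip(speeds, 0.0, None)  # wind speed cannot be negative

# one simulated year of hourly wind speeds
hourly = simulate_arma_wind(8760, phi=[0.8], theta=[0.2])
```

In a sequential simulation, each year of synthetic speeds would be passed through a turbine power curve and combined with conventional unit states to accumulate adequacy indices.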
|
534 |
Growth, Characterization, and Modification of Vertically Aligned Carbon Nanotube Films for Use as a Neural Stimulation Electrode
Brown, Billyde, January 2010
Electrical stimulation is capable of restoring function to a damaged or diseased nervous system and can thereby improve the lives of patients in a remarkable way. For example, cochlear and retinal prostheses can help the deaf to hear and the blind to see, respectively. Improvements in the safety, efficacy, selectivity, and power consumption of these technologies require a long-term biocompatible, chemically and mechanically stable, low-impedance neural electrode interface which can rapidly store high charge densities without damaging the electrode or the neural tissue.

In this study, vertically aligned multi-walled carbon nanotube films were synthesized and investigated for their potential use as a neural stimulation electrode. Materials characterization using electron microscopy, Raman spectroscopy, and x-ray photoelectron spectroscopy, together with in-vitro electrochemical characterization using cyclic voltammetry, electrochemical impedance spectroscopy, and potential transient measurements, was employed to determine material and electrochemical properties. Characterization was performed before and after electrochemical and oxidative thermal treatments to determine whether the desired properties improved.

The results indicated that electrochemical activation by potential cycling across the water window, a technique often used to activate and greatly improve the performance of iridium oxide electrodes, was also favorable for carbon nanotube (CNT) electrodes, especially for thicker films. In addition, oxidative thermal treatments that did not significantly oxidize or etch the nanotubes caused a significant improvement in electrode performance. Phenomenological models were developed from these findings.
Finally, growth of aligned CNTs using a platinum catalyst was demonstrated; this approach may reduce biocompatibility concerns arising from the otherwise highly toxic catalyst residue inherent in CNTs, which may become bioavailable during chronic use. / Dissertation
|
535 |
Shielded Metal Waveguides with Uniform Electric Field Distributions
Zhou, Tao, January 2010
This research focuses on achieving a uniformly distributed electric field within a metal waveguide. A rectangular waveguide centrally loaded with a dielectric product is investigated first, since rectangular waveguides are widely used and can easily be made into exposure chambers and applicators. A dielectric-slab-loaded rectangular waveguide (TEM waveguide) with a uniform electric field distribution across its cross-section is then introduced. Owing to the limitations of the TEM waveguide, more practical methods are explored in this research by changing the shape of the cross-section of a rectangular waveguide. The simulation results show that the new methods greatly enlarge the region of uniform electric field and even lower the cutoff frequency, which means that a smaller waveguide may operate at the same frequency as a larger one. / Dissertation
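The cutoff observation rests on the standard TE10 relation for an air-filled rectangular guide, f_c = c/(2a): increasing the effective broad dimension a lowers f_c, so a reshaped guide can operate at a lower frequency for its size. A minimal numeric check (the WR-90 dimension is only a familiar reference point, not a guide from this work):

```python
# Cutoff frequency of the dominant TE10 mode in an air-filled
# rectangular waveguide of broad dimension a: f_c = c / (2a).
C = 299_792_458.0  # speed of light in vacuum, m/s

def te10_cutoff(a_metres):
    return C / (2 * a_metres)

# WR-90 guide (a = 22.86 mm) cuts off near 6.557 GHz
fc = te10_cutoff(0.02286)
```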
|
536 |
Radar Space-Time Processing for Range-Folded Spread-Doppler Clutter Mitigation
Lee, William Weiham, January 2011
Pulsed radars have an ambiguous relationship between range and velocity which is governed by the pulse repetition frequency (PRF), leading to potential tradeoffs. High PRFs are necessary to avoid velocity aliasing, but come at the expense of unambiguous range. Obscuration due to ambiguous-range foldover of distant clutter echoes seriously degrades target detectability. For skywave HF over-the-horizon (OTH) radar, ionospheric motion spreads surface clutter in Doppler; coupled with range-folded clutter, this introduces the effect of so-called 'separated' spread-Doppler clutter (SDC). Selection of a nonrecurrent waveform (NRWF) with a quadratic-phase interpulse code has been shown to mitigate long-range SDC by folding the multi-hop returns into known, disassociated Doppler regions.

With multiple receive elements, spatial processing can be performed to exploit the correlation between spatial frequency and Doppler shift produced by ionospheric winds. Adaptive beamforming is known to provide asymptotically optimal array gain if sufficient training data is available. In highly nonstationary environments, however, obtaining this asymptotic performance is difficult, as neither knowledge of the target wavefront nor signal-free training data is easily obtainable. A blind adaptive spatial processing (BASP) technique has been proposed, combining minimum variance (MV) adaptive beamforming and blind source separation (BSS). The unique idea of BASP is the formulation of a signal-free covariance matrix from BSS clutter and noise components at a single range bin, which is then used in adaptive beamforming to suppress clutter.

This research explores a clutter mitigation method that combines the NRWF and BASP in order to recover targets masked by Doppler-spread surface backscatter from points beyond the radar's maximum unambiguous range, while maintaining target detectability elsewhere in Doppler.
Current methods for mitigating range-folded clutter, such as reducing the pulse repetition frequency or using non-recurrent waveforms, lose usable Doppler space available for target detection or reduce the target revisit rate. The proposed research uses BSS methods to exploit the known Doppler separation afforded by the NRWF in order to estimate the spatial wavefront of the clutter across a linear receive array. Spatial adaptive processing using this estimated wavefront is then used to suppress range-folded clutter at each range bin without sacrificing the radar timeline or usable Doppler space.

Simulations are conducted to understand the NRWF and its ability to separate range-folded clutter in Doppler. The BASP method is applied to the NRWF, and the results demonstrate performance improvement in terms of achievable signal-to-clutter-and-noise ratio gain. Laboratory experimental results show the NRWF's ability to separate range-folded clutter into designed Doppler regions. BASP is then applied and demonstrated to mitigate the separated range-folded clutter and recover usable Doppler space. / Dissertation
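The minimum-variance beamforming step at the heart of BASP can be sketched as follows; the array size, interferer direction, and covariance model are toy assumptions, and a real BASP implementation would build the signal-free covariance from BSS-separated clutter and noise components rather than from a known interferer:

```python
import numpy as np

def mvdr_weights(R, steering):
    """Minimum-variance distortionless-response beamformer weights:
    w = R^{-1} a / (a^H R^{-1} a), for covariance R and steering vector a.
    Unit gain is kept toward the steering direction while the covariance's
    dominant (clutter) directions are suppressed."""
    Ri_a = np.linalg.solve(R, steering)
    return Ri_a / (steering.conj() @ Ri_a)

# toy 4-element uniform linear array, half-wavelength spacing
n = 4
look = np.exp(1j * np.pi * np.arange(n) * np.sin(0.0))           # broadside target
clutter = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(40)))
R = 10 * np.outer(clutter, clutter.conj()) + np.eye(n)           # clutter + noise
w = mvdr_weights(R, look)
```

Applying `w` preserves the look-direction response exactly while placing a deep null on the clutter wavefront, which is the role the estimated BSS wavefront plays in the proposed method.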
|
537 |
Separation of Spiky Transients in EEG/MEG Using Morphological Filters in Multi-Resolution Analysis
Pon, Lin-Sen, 18 July 2002
Epileptic electroencephalographic (EEG) data often contain a large number of sharp, spiky transient patterns which are diagnostically important. Background activity is the EEG activity representing the normal pattern from the brain. Transient activity manifests itself as unstructured sharp waves of short duration, distinguished from the background EEG. Generally speaking, the amplitude of background activity varies slowly with time, while spiky transient activity varies quickly, with pointed peaks.
In this thesis, a method has been developed to automatically extract transient patterns based on morphological filtering in a multiresolution representation. Using a simple structuring element (SE) to match a signal's geometrical shape, mathematical morphology is applied to detect differences in the morphological characteristics of signals. If a signal contains features consistent with the geometrical features of the structuring element, a morphological filter can recognize and extract the signal of interest. The multiresolution scheme can be based on the wavelet packet transform, which decomposes a signal into scaling and wavelet coefficients at different resolutions. The morphological separation filter is applied to these coefficients to produce two subsets of coefficients for each coefficient sequence: one representing the background activity and the other representing the transients. These subsets of coefficients are processed by the inverse wavelet transform to obtain the transient component and the background component. Alternatively, a morphological lifting scheme has been proposed for separating these two components. Experimental results on both synthetic data and real EEG data have shown that the developed methods are highly effective in automatically extracting spiky transients from epileptic EEG data.
The interictal spike trains thus extracted from multiple electrode recordings are further analyzed. Their cross-correlograms are examined according to a stochastic point process model. Our experimental results have been verified against human experts' estimates.
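The core morphological operation can be illustrated with a flat structuring element on a synthetic trace; this is a simplified sketch of grey-scale opening, not the thesis's wavelet-packet pipeline, and it handles only positive-going spikes:

```python
import numpy as np

def flat_opening(x, w=9):
    """Grey-scale opening with a flat structuring element of width w:
    a sliding minimum (erosion) followed by a sliding maximum (dilation).
    Peaks narrower than the element are removed, leaving the background."""
    pad = w // 2
    xe = np.pad(x, pad, mode='edge')
    eroded = np.array([xe[i:i + w].min() for i in range(len(x))])
    ee = np.pad(eroded, pad, mode='edge')
    return np.array([ee[i:i + w].max() for i in range(len(x))])

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 3 * t)    # slow 'background' rhythm
x[100] += 3.0                     # one narrow, positive spiky transient
background = flat_opening(x)
transients = x - background       # spike survives; background is removed
```

Because opening never exceeds the original signal, the residual is nonnegative and concentrates on peaks narrower than the structuring element, which is exactly the geometric matching idea described above.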
|
538 |
Signal Decomposition Into Primitive Known Signal Classes
Galati, David G, 19 April 2002
The detection of simple patterns such as impulses, steps, and ramps in signals is a very important problem in many signal processing applications. The human eye is a very effective filter and hence is capable of performing this task very efficiently. In applications where no humans are involved in the signal interpretation process, this task needs to be performed by a computer.
In this thesis, we propose and investigate two novel algorithms to automate this task. Our starting point is a discrete signal composed of an unknown number of ramps, steps, and impulses with unknown magnitudes and delays, plus random noise. We propose two different criteria, based on minimum energy and minimum complexity, to decompose the signal into these basic simple patterns. Solutions based on these criteria are proposed and examined. Overall, the minimum complexity criterion seems to produce results more similar to the human eye's interpretation than the minimum energy approach does.
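A minimum-energy fit over a dictionary of impulse, step, and ramp patterns can be sketched as follows; here the delays are assumed known, whereas the thesis also treats unknown delays and pattern counts:

```python
import numpy as np

def pattern_basis(n, delays):
    """Columns: unit impulse, unit step, and unit ramp starting at each
    delay (a hypothetical discretisation for illustration only)."""
    cols = []
    for d in delays:
        imp = np.zeros(n); imp[d] = 1.0
        step = np.zeros(n); step[d:] = 1.0
        ramp = np.zeros(n); ramp[d:] = np.arange(n - d)
        cols += [imp, step, ramp]
    return np.column_stack(cols)

n = 200
B = pattern_basis(n, delays=[50, 120])
true = np.array([2.0, 0, 0, 0, 1.5, 0])       # impulse at 50, step at 120
rng = np.random.default_rng(0)
x = B @ true + 0.05 * rng.standard_normal(n)  # noisy composite signal
est, *_ = np.linalg.lstsq(B, x, rcond=None)   # minimum-energy residual fit
```

The least-squares solution minimizes the residual energy; a minimum-complexity criterion would instead penalize the number of active patterns, which tends to match visual interpretation better, as the abstract notes.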
|
539 |
An Improved Evaluation Method for Airplane Simulator Motion Cueing
Marodi, Alex, 30 April 2002
The lack of sufficient evaluation criteria for motion systems has contributed to perceivable differences in motion cues among similar airplane simulators. To resolve this issue, criteria for simulator motion cueing and evaluation must be developed to ensure uniform and optimum cues within a motion system's workspace. Therefore, an improved evaluation method is proposed to enable a better assessment of motion cueing within the workspace.
To demonstrate the effectiveness of the improvement, an off-line simulation of a motion system is developed and used for the evaluation. A common motion cueing algorithm is incorporated in the simulation to control a motion platform model. Test signals that approximate typical airplane specific forces, for selected maneuvers, are used to drive the simulation. During each simulation test run, a platform trajectory is recorded for the maneuver. The trajectory data are then processed by an optimization routine that determines the dynamic workspace limits as a function of the trajectory. The time histories of the trajectory and the workspace limits are then plotted for evaluation.
Presenting the platform trajectory along with the dynamic workspace limits provides another way of evaluating the quality of motion cues within the workspace. Augmenting the existing motion criteria that are used in current evaluation methods with criteria based on the dynamic workspace limits yields an improved evaluation method. This improved evaluation method contributes to the development of criteria for motion evaluation.
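A typical cueing algorithm of the kind driven by such test signals is a washout filter; the first-order high-pass below is a generic stand-in with an assumed time constant, not the algorithm evaluated in the thesis:

```python
import numpy as np

def washout_highpass(force, fs, tau=2.0):
    """First-order high-pass 'washout' of a specific-force command:
    passes the onset cue, then washes the platform back toward neutral
    so it stays within its limited workspace. tau is an assumed value."""
    alpha = tau / (tau + 1.0 / fs)
    y = np.zeros_like(force)
    for k in range(1, len(force)):
        y[k] = alpha * (y[k - 1] + force[k] - force[k - 1])
    return y

fs = 100.0
t = np.arange(0, 10, 1 / fs)
step = np.where(t >= 1.0, 1.0, 0.0)  # sustained 1 m/s^2 specific force
cue = washout_highpass(step, fs)     # onset passed, steady state washed out
```

Recording the platform trajectory produced by such a filter for each maneuver, then comparing it against the dynamic workspace limits, is the evaluation idea the abstract describes.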
|
540 |
A Uniform Mathematical Representation of Logic and Computation
Bandi, Mahesh M, 04 September 2002
The current models of computation share varying levels of correspondence with actual implementation schemes. They can be arranged in a hierarchical structure depending upon their level of abstraction. In classical computing, the circuit model shares closest correspondence with physical implementation, followed by finite automata techniques. The highest level in the abstraction hierarchy is that of the theory of computation.
Likewise, there exist computing paradigms based upon different sets of defining principles. The classical paradigm involves computing as it has traditionally been applied, and is characterized by Boolean circuits that are irreversible in nature. The reversible paradigm requires invertible primitives in order to perform computation. The paradigm of quantum computing applies the theory of quantum mechanics to perform computational tasks.
Our analysis concludes that descriptions at the lowest level of the abstraction hierarchy should be uniform across the three paradigms, but this is not true of current descriptions. We propose a mathematical representation of logic and computation that successfully explains computing models in all three paradigms while making a seamless transition to higher levels of the abstraction hierarchy. This representation is based upon the theory of linear spaces and hence is referred to as the linear representation. The representation is first developed in the classical context, then extended to the reversible paradigm by exploiting the well-developed theory of invertible mappings. The quantum paradigm is reconciled with this representation through the correspondence that unitary operators share with the proposed linear representation. In this manner, the representation is shown to account for all three paradigms. The correspondence with finite automata models is also shown to hold implicitly during the development of the representation. Most importantly, the linear representation accounts for the Hamiltonians that define the dynamics of a computational process, thereby resolving its correspondence with the underlying physical principles.
The consistency of the linear representation is checked against an existing application in VLSI CAD that exploits the linearity of logic functions for symbolic representation of circuits. Some possible applications, and the applicability of the linear representation to some open problems, are also discussed.
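The flavor of such a representation can be suggested with a toy example: Boolean values as basis vectors and gates as matrices, where an invertible gate such as NOT is simultaneously a reversible and a unitary operator, while the irreversible AND is a non-square linear map. This construction is only illustrative of the general idea, not the thesis's formalism:

```python
import numpy as np

# Boolean values as basis vectors: |0> = [1, 0], |1> = [0, 1].
ZERO, ONE = np.array([1, 0]), np.array([0, 1])

# NOT is a permutation matrix: invertible (reversible paradigm) and
# unitary (quantum paradigm), so one linear object serves all three.
NOT = np.array([[0, 1],
                [1, 0]])

# Irreversible AND on two bits: a 2x4 map on the tensor-product space,
# column index 2a + b corresponds to the input pair (a, b).
AND = np.zeros((2, 4))
for a in (0, 1):
    for b in (0, 1):
        AND[a & b, 2 * a + b] = 1

x = np.kron(ONE, ZERO)  # joint input state for a = 1, b = 0
```

The non-square shape of AND is precisely what blocks its use in the reversible and quantum paradigms, where only square invertible (unitary) maps are admitted.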
|