81

Field Simulation and Calibration in External Electro-Optic Sampling

Wu, Xiaohua 07 1900 (has links)
Electro-optic (E-O) sampling is capable of measuring the internal node response of microwave and high-speed devices and circuits with minimal invasiveness, up to the terahertz frequency range or in the picosecond domain. Unfortunately, the accuracy of E-O sampling is still not comparable to a conventional network analyzer due to the lack of a general calibration technique. Therefore, a general and systematic calibration technique is needed for quantitative measurement using E-O sampling. In this thesis, a full-wave time-domain field analysis technique, the Finite-Difference Time-Domain (FD-TD) method, has been applied to the external E-O sampling problem. Using this theoretical simulation model, field disturbances in external E-O sampling have been investigated, a calibration method for external E-O sampling has been developed, and optimum probe design techniques are suggested. Field disturbances (i.e., invasiveness and distortion) in external E-O sampling have been examined quantitatively, for the first time, by means of field and wave simulation. The results suggest that probes introduce little invasiveness if they are removed from contact by a finite distance that depends on the dimensions of the device being tested. The sampled-signal distortion introduces considerable error at high frequencies or when sub-picosecond pulses are involved. The FD-TD method has been successfully applied, for the first time, to the external E-O sampling problem and combined with the electro-optic tensor to yield the electro-optic response in E-O sampling. The probe transfer function has then been derived to characterize probe specifications. It provides a practical means to quantitatively investigate the operational frequency limit of any given probe. A field-based calibration technique has been developed to de-embed both invasiveness and distortion using the full-wave field modeling and the probe transfer function, which can be found from field simulation or from measurement results. It has been found that each specific probe has an intrinsic transfer function primarily determined by the probe dimensions and the space between the probe face and the device being tested. The impact of different probe materials, different sampling beam positions, and different probe dimensions with respect to the device being tested (i.e., the electric field orientation) has been systematically evaluated for the first time. The results provide the information engineers need to properly design probes and system set-ups in order to achieve optimum E-O sampling results. It has been shown that LiTaO₃ probes are favored over GaAs when sensitivity is the major concern. In contrast, GaAs probes are preferred when accuracy at high frequencies, up to several hundred gigahertz, is of primary interest. In addition, sampling near the leading edge of the probe is preferred in external E-O sampling to minimize the distortion induced in the measured results. The research conducted in this thesis can be straightforwardly extended to direct and hybrid E-O sampling problems. Further research and development will lead this field-based calibration technique in E-O sampling to more general devices such as monolithic microwave integrated circuits (MMICs). / Doctor of Philosophy (PhD)
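For readers unfamiliar with the FD-TD method named above, here is a minimal one-dimensional sketch in Python; the grid size, time-step count, and source parameters are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Minimal 1D FD-TD (Yee) update loop: E and H staggered in space and time.
# Normalized units with Courant factor 0.5; no absorbing boundaries, so the
# grid edges act as perfect reflectors. All parameters are illustrative.
nz, nt = 200, 500
ez = np.zeros(nz)
hy = np.zeros(nz - 1)

for n in range(nt):
    hy += 0.5 * (ez[1:] - ez[:-1])        # update H from the curl of E
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])  # update E from the curl of H
    ez[nz // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source

print("peak |Ez| after", nt, "steps:", np.abs(ez).max())
```

The staggered field updates and leapfrog time-stepping shown here are the core of the method; the full 3D simulations in the thesis add material properties, boundary treatments, and the electro-optic tensor on top of this loop.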
82

Neural Networks for Data Fusion

Wang, Fengzhen January 1997 (has links)
The process of data fusion may be viewed as a multi-level hierarchical inference process whose ultimate goal is to assess a mission situation and to identify, localize, and analyze threats. Each succeeding level of data fusion processing deals with a greater level of abstraction. The lowest level is concerned solely with individual objects: sensor data are processed to derive the best estimates of current and future positions for each hypothesized object, as well as to provide an inference as to the identity and key attributes of the objects. With the recent proliferation and increasing sophistication of new technologies, it is recognized that incorporating new techniques, such as neural networks, will make data fusion systems more powerful in tri-service command, control, and communications (C3) applications. In this thesis, optimization neural networks are investigated. A new technique for measurement data association in a multi-target radar environment is developed. The technique is based on the mean-field-theory machine and has the advantages of both the Hopfield network and the Boltzmann machine. In the technical development, three new energy functions have been created. Theoretically, the critical annealing temperature is found, which determines the annealing temperature range. A convergence theorem for the mean-field-theory machine is put forward. Based on the technique, neural data association capacities have been evaluated in cases with and without clutter, taking into account different accuracies for radar measurements. The new energy functions have been extended to multi-dimensional data association. A comprehensive analysis by computer simulation has demonstrated that the new technique developed here possesses high association capacity in the presence of false alarms; it can cope with track-crossing in a dense target environment. A feature-mapping neural network for centralized data fusion is presented, and its performance is compared with that of the Maximum Likelihood approach. In support of our study of multisensor data fusion for airborne target classification with artificial neural networks (ANNs), we designed a neural classifier. Multilayer perceptron networks trained by the back-propagation (BP) rule are discussed. In order to speed up training, i.e., decrease the number of epochs in the learning process, both momentum and adaptive learning-rate methods are used. The simulation results show that automatic target classification using neural networks has real potential. / Doctor of Philosophy (PhD)
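As a rough sketch of the mean-field-annealing idea behind the data association technique described above, the fragment below relaxes a matrix of soft measurement-to-target assignments while a temperature is lowered; the cost matrix, constraint weight, and annealing schedule are invented for illustration and do not reproduce the thesis's energy functions.

```python
import numpy as np

# Mean-field annealing sketch for measurement-to-target association.
# Energy: sum(cost * v) plus quadratic penalties pushing each row and
# column of v to sum to one. All parameters are illustrative.
rng = np.random.default_rng(0)
cost = rng.uniform(0, 1, size=(4, 4))    # measurement/target distances (invented)
v = np.full((4, 4), 0.5)                 # mean-field (soft assignment) variables
gamma, T = 2.0, 5.0                      # constraint weight, start temperature

while T > 0.01:
    for _ in range(20):                  # relax to a mean-field fixed point at T
        u = (-cost
             - gamma * (v.sum(axis=1, keepdims=True) - 1)
             - gamma * (v.sum(axis=0, keepdims=True) - 1))
        v = 1.0 / (1.0 + np.exp(-u / T)) # sigmoid mean-field update
    T *= 0.9                             # geometric annealing schedule

print(np.round(v))  # tends toward roughly one assignment per row/column
```

Starting above the critical temperature and cooling slowly is what lets the soft assignments settle into a near-binary association matrix rather than a poor local minimum.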
83

Modeling and Design of Photonic Crystal Waveguides and Fibers

Shen, Linping 09 1900 (has links)
Photonic crystal waveguides and fibers are emerging waveguides formed from relatively large-scale periodic dielectric materials, also known as photonic band-gap materials. Modeling and simulation of such waveguide structures helps to build understanding of the modal and transmission characteristics and their dependence on the key design and operating parameters. In this dissertation, multilayer slab and circular photonic crystal waveguides are investigated theoretically, with emphasis on their modal characteristics and transmission properties relevant to broadband telecommunication systems and networks. Key performance parameters (e.g., the modal field, the modal effective index, the group-velocity dispersion, the confinement loss, the mode effective area, and the confinement factor) are simulated and analyzed using both analytical and numerical methods. For the sake of completeness, a comprehensive review of the different mathematical methods for simulation and analysis of optical waveguides in general, and photonic crystal waveguides in particular, is presented. The theoretical frameworks for rigorous methods, such as the finite difference method and the plane wave expansion method, and for approximate methods, such as the effective index method and the envelope approximation method, are discussed, and their merits and shortcomings in modeling and analysis of photonic crystal waveguides and fibers are examined in great detail. One-dimensional (1D) slab photonic crystal waveguides (PCWs) are the simplest to model and analyze, yet can offer deep insight into the salient features of photonic crystal waveguides and fibers. An exhaustive study of the modal properties of 1D PCWs is carried out with the help of the rigorous transfer matrix method. Four different guiding regimes, arising from total internal reflection (TIR) and the photonic band gap (PBG), are recognized, and their unique features are revealed and discussed. Further, the scope of validity and level of accuracy of two insightful approximate methods (i.e., the effective index method and the envelope approximation method) are examined in detail by comparison with the exact solutions. Furthermore, new results on the effects of the number of unit cells (i.e., layer pairs), the layer size-to-pitch ratio, and the core thickness on the modal properties are obtained and discussed. Two-dimensional (2D) photonic crystal waveguides, such as air-hole photonic crystal fibers (PCFs), find more practical applications but are also much more difficult to model and analyze. In this context, modal analyses with different theoretical frameworks, i.e., the scalar, semi-vector, and full-vector formulations, are presented and discussed with the help of the finite difference method. It is demonstrated that the vector nature of the guided modes of PCFs needs to be considered in analyzing modal characteristics such as the dispersion. Based on the band structure of 2D photonic crystals, modal characteristics of PBG-PCFs and TIR-PCFs are obtained and their physical behavior explained. A new parameter is proposed to judge single-mode operation of PCFs, and the bending loss of PCFs is calculated by a numerical method for the first time. Furthermore, the effects of a finite number of air holes and of the size of interstitial holes on the modal properties of PCFs are investigated.
Some scaling transformations of the modal properties, related to the design parameters of the waveguide structures, are derived. Based on the rigorous analysis model and these scaling transformations, a general procedure for design and optimization of PCFs with desired modal properties is proposed. In comparison with the conventional design method, the new procedure is more efficient and can be readily automated for design optimization. Several applications of the design procedure (e.g., design optimization of dispersion-shifted fibers, dispersion-flattened fibers, and dispersion-compensating fibers) are demonstrated. / Doctor of Philosophy (PhD)
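A compact sketch of the transfer matrix method mentioned above for 1D multilayer analysis, here computing the normal-incidence transmittance of a quarter-wave stack; the wavelength, indices, and number of layer pairs are assumed values, not the structures studied in the dissertation.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic 2x2 matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam          # optical phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

lam = 1.55e-6                                # vacuum wavelength (assumed)
n_hi, n_lo = 3.5, 1.45                       # illustrative layer indices
M = np.eye(2, dtype=complex)
for _ in range(8):                           # 8 quarter-wave layer pairs
    M = M @ layer_matrix(n_hi, lam / (4 * n_hi), lam)
    M = M @ layer_matrix(n_lo, lam / (4 * n_lo), lam)

n_in, n_out = 1.0, 1.0                       # air on both sides
denom = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
t = 2 * n_in / denom
print("power transmittance:", (n_out / n_in) * abs(t) ** 2)  # ~0 inside the gap
```

Deep inside the photonic band gap the stack transmits almost nothing, which is precisely the mechanism that PBG guidance exploits.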
84

Two-dimensional Computer Vision for Inspection and Robotics

Capson, David January 1984 (has links)
The thesis concerns binary vision for non-contact inspection and robotics applications in flexible manufacturing. Two-dimensional silhouettes of three-dimensional objects are processed to measure a number of features including area, perimeter, circularity, maximum radius, moments, and number of holes. This information is then used to identify and locate randomly positioned objects, providing visual feedback for part acquisition by an industrial robot or for inspection tasks that might include checking dimensional tolerances, verifying hole placement, and the like.

Silhouettes are encoded in the system as a linked list of "vertex points" representing changes of direction on the contour. An algorithm and data structures for extraction of vertex points from a raster-scanned binary image have been developed. The method operates sequentially, and no restrictions are imposed on the number or topology of silhouettes in each frame. Several existing contour tracing algorithms are reviewed; the new algorithm is shown to offer considerable improvement in execution time at the expense of a small increase in memory. It is also demonstrated that vertex point approximations use significantly fewer points than run-length segment representations.

System implementation is based on a 232 x 240 CID camera, an 8086/8087 single-board microcomputer, and two custom-built boards. The architecture features a multiple-bus configuration designed for high-speed, parallel operation of dedicated modules: grey-level histogram generation and binary image acquisition run at video rates. The complete software resides in EPROM and includes feature extraction for recognition and location of silhouettes encoded as vertex point lists, a database of prototype objects, and a comprehensive command set.

The software also includes communication with a PUMA 600 robot via a serial interface, as well as extensions to its VAL language; in this way the robot accesses visual information under program control. An example is given of parts sorting using the extended instruction set.

The results of a large number of statistical measurements are used to establish overall system performance. / Doctor of Philosophy (PhD)
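The "vertex point" encoding described above can be illustrated with a short sketch that walks an 8-connected chain-coded contour and keeps only the points where the direction changes; the chain-code convention and test contour below are made up for illustration and are not the thesis's algorithm.

```python
# Sketch: reduce a raster contour to "vertex points" (direction changes).
# Assumed 8-connected chain-code convention: 0=E, 1=NE, 2=N, ..., 7=SE,
# with y increasing downward as in raster coordinates.
MOVES = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def vertex_points(start, chain):
    """Return the contour points where the chain-code direction changes."""
    x, y = start
    vertices = [start]
    prev_dir = chain[0]
    for d in chain:
        if d != prev_dir:            # direction changed: current point is a vertex
            vertices.append((x, y))
            prev_dir = d
        dx, dy = MOVES[d]
        x, y = x + dx, y + dy
    vertices.append((x, y))          # close with the final point
    return vertices

# A small rectangular contour: 4 east, 4 north, 4 west, 4 south (invented)
print(vertex_points((0, 0), [0] * 4 + [2] * 4 + [4] * 4 + [6] * 4))
```

For this rectangle the 16-step contour collapses to its four corners plus the closing point, which is why vertex-point lists need far fewer entries than run-length representations.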
85

Advances in Wideband Array Signal Processing Using Numerical Bayesian Methods

Ng, William 09 1900 (has links)
This thesis focuses on joint model-order detection and estimation of the parameters of interest, with applications to narrowband and wideband array signal processing in both off-line and on-line contexts. A novel data model is proposed that is capable of handling both narrowband and wideband cases through the use of an interpolation function and signal samples. In the off-line mode, Markov chain Monte Carlo (MCMC) methods are applied to obtain a numerical approximation of the joint posterior distribution of the parameters, under the condition that they have stationary distribution functions. If the distribution functions are nonstationary, the on-line approach is used instead; it employs a sequential implementation of Monte Carlo (SMC) methods applied to probabilistic dynamic systems. Four inter-related problems were addressed in the course of this thesis.

1. A new data structure based on interpolation functions and signal samples to approximate wideband signals was developed. This data model, after appropriate transformation, has features similar to those of the conventional narrowband data model. Furthermore, although the novel data model is developed for the wideband scenario, it can also address the narrowband scenario without change of structure or parameters. This data model is the basis on which the MCMC and SMC approaches solve the array signal processing problems developed in the subsequent chapters.

2. The first algorithm presents an advanced approach, using sequential MC methods, to beamforming for narrowband signals in white noise with unknown variance. Traditionally, beamforming techniques assume that the number of sources is given and that the signal of interest (or target) is stationary within an observation period. In reality, both assumptions are commonly violated. The former can be dealt with by jointly estimating the number of sources, whereas the latter severely limits the usefulness of conventional beamforming techniques when the target is indeed moving. When the sources are moving, tracking their incident angles is required, and the accuracy of such tracking significantly affects the performance of signal separation and recovery, which is the objective of beamforming. The proposed method is capable of recursively estimating the time-varying number of sources, as well as their incident angles, as new data arrive, so that the signal amplitudes can be separated and restored in an on-line fashion.

3. The second algorithm presents an application of MCMC methods to the joint detection and estimation problem in the wideband scenario, in white noise with unknown variance. Compared to the narrowband scenario, this problem is in general more difficult and cumbersome to solve. Conventional approaches tend to solve it in the frequency domain, and as such require a considerable amount of data to sustain accuracy, which imposes a large computational burden. Furthermore, these approaches employ separate algorithms, such as AIC and MDL, to estimate the number of sources. In contrast, the proposed method utilizes the reversible-jump MCMC technique, which detects the number of sources and estimates the parameters of interest simultaneously within the same algorithm. The proposed method is applied to the novel data model mentioned earlier and solves the problem in the time domain, which significantly reduces the requirement for a large number of data samples.

4. The final algorithm is an extension of the off-line approach to the wideband array signal processing problem using sequential MC methods. Most conventional array signal processing approaches are developed under the assumption that the sources are stationary in direction of arrival. If this assumption is invalid, the solutions from these approaches become suboptimal and their performance is significantly degraded. When sources are nonstationary, tracking their motions is needed; in wideband scenarios this is more difficult and cumbersome than in the narrowband scenario, because wideband methods usually require a considerable amount of data for processing. The proposed algorithm focuses on the sequential implementation of particle filters for probabilistic dynamic systems. It is applied to the modified novel data structure mentioned earlier, in white noise with unknown variance, for recursive estimation of the motions of the sources as new data arrive. A systematic statistical testing procedure is used to keep track of the number of sources. / Doctor of Philosophy (PhD)
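To make the sequential Monte Carlo machinery concrete, here is a bare-bones bootstrap particle filter tracking a single slowly drifting incident angle; the random-walk dynamics, noise levels, and scalar bearing measurement are simplified stand-ins for the array data model developed in the thesis.

```python
import numpy as np

# Bootstrap particle filter sketch: track one slowly drifting incident angle.
# The random-walk dynamics and Gaussian bearing measurement are illustrative.
rng = np.random.default_rng(1)
n_particles, n_steps = 500, 50
true_theta = 0.3                                        # radians
particles = rng.uniform(-np.pi / 2, np.pi / 2, n_particles)

for t in range(n_steps):
    true_theta += 0.01                                  # slow target motion
    z = true_theta + rng.normal(0, 0.05)                # noisy bearing measurement

    particles += rng.normal(0, 0.02, n_particles)       # propagate (random walk)
    w = np.exp(-0.5 * ((z - particles) / 0.05) ** 2)    # likelihood weights
    w /= w.sum()

    idx = rng.choice(n_particles, n_particles, p=w)     # multinomial resampling
    particles = particles[idx]

print(f"true angle {true_theta:.3f}, estimate {particles.mean():.3f}")
```

The propagate-weight-resample cycle shown here is the skeleton that the thesis's algorithms build on, with the array likelihood and a time-varying number of sources in place of this scalar toy model.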
86

Blind Signal Separation

Lu, Jun 04 1900 (has links)
This thesis addresses the blind signal separation (BSS) problem. The essence of the BSS problem is to recover a set of source signals from a group of sensor observations. These observations can be modeled as instantaneous or convolutive mixtures of the sources; accordingly, the problem is known as blind separation of instantaneously mixed signals or blind separation of convolutively mixed signals. In this thesis, we tackle both problems. For blind separation of instantaneously mixed signals, we first cast the separation problem as an optimization problem using a mutual-information-based criterion, and solve it with an extended Newton's method on the Stiefel manifold. Then, for the special case in which the sources are constant modulus (CM) signals, we formulate the separation problem as a constrained minimization problem exploiting the constant-modulus property of the signals; again, we solve it using Newton's method on the Stiefel manifold. For blind separation of convolutively mixed signals, also known as the blind deconvolution problem, we first propose a time-domain method: we cast the separation problem as an optimization problem using a mutual-information-based criterion and solve it using a sequential quadratic programming (SQP) method. Then, we propose a set of higher-order statistics (HOS) based criteria for blind deconvolution and discuss the relationship between our proposed criteria and other HOS-based criteria. We next propose a frequency-domain HOS-based blind channel identification approach, in which we identify the channel frequency response by jointly diagonalizing a set of so-called polyspectrum matrices. Finally, we propose a second-order statistics (SOS) based method for blind channel identification. Assuming the channel inputs are cyclostationary signals, we identify the channel frequency response through the singular value decomposition (SVD) of a cyclic cross-spectrum based matrix. Numerical simulations are used throughout this thesis to compare our proposed methods with other methods from the literature and to demonstrate their validity and competitiveness. / Doctor of Philosophy (PhD)
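As a taste of the constant-modulus idea mentioned above, the sketch below runs a plain stochastic-gradient constant modulus algorithm (CMA) on a synthetic instantaneous 2x2 mixture; it uses a simple gradient step rather than the Newton-on-Stiefel-manifold solver developed in the thesis, and all signals and parameters are invented.

```python
import numpy as np

# Stochastic-gradient CMA sketch: extract one constant-modulus source from
# an instantaneous 2x2 mixture. A far simpler optimizer than the thesis's
# Newton method on the Stiefel manifold; synthetic data throughout.
rng = np.random.default_rng(2)
n = 20000
s = np.sign(rng.standard_normal((2, n)))       # two BPSK (unit-modulus) sources
A = rng.standard_normal((2, 2))                # unknown mixing matrix
x = A @ s                                      # sensor observations

w = np.array([1.0, 0.0])                       # separating vector
mu = 1e-3                                      # step size
for k in range(n):
    y = w @ x[:, k]
    # CM cost J = E[(|y|^2 - 1)^2]; one-sample gradient step
    w -= mu * (y * y - 1.0) * y * x[:, k]

y = w @ x
corr = [abs(np.corrcoef(y, s[i])[0, 1]) for i in range(2)]
print("correlation with each source:", np.round(corr, 3))
```

After convergence the output correlates strongly with one source and weakly with the other, which is exactly the behavior the CM criterion is designed to enforce.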
87

Artificial Intelligence Techniques Applied to Fault Detection Systems

Fischer, Daniel 04 1900 (has links)
This thesis presents novel applications of Artificial Intelligence-based algorithms to failure detection systems. It shows the benefits that intelligent, adaptive blocks can provide, along with potential pitfalls. A new fault detection structure is introduced which has desirable properties when dealing with missing data or data corrupted by extraneous disturbances. A classical alarm generation procedure is extended by transforming it into an optimal, real-time, adaptive block. Two techniques, artificial neural networks and Partial Least Squares, complement each other in one of the failure detection applications, exploiting their respective non-linear modeling and de-correlation strengths. Artificial Intelligence techniques are compared side by side with classical approaches and the results are analyzed. Three practical examples are examined: static security assessment of electric power systems, oil leak detection in underground power cables, and a stator overheating detector. These case studies were chosen because each represents a class of failure detection problems. Static security assessment of electric power systems is a class of problems with somewhat correlated inputs and very little learning data; while the time required for the system to learn is not a concern, the recall time must be short enough for real-time performance. Oil leak detection in underground power cables represents the class of problems where one has vast amounts of data indicative of a properly functioning system, yet data from a failed system are very sparse. Unlike the static security assessment problem, the oil leak detector has to consider the time dynamics of the system, and special provisions must be made to accommodate missing data, which would otherwise interrupt the contiguous data sets required for proper operation. This case study shows ways to exploit slight sensor redundancy in order to detect sensor breakdown along with the main system failure. The third class of problems is showcased by the electric generator stator overheating detector. This application must deal with highly correlated inputs, along with a lack of fault data for learning; physical system non-linearities as well as time dynamics must also be addressed. / Doctor of Philosophy (PhD)
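As a loose illustration of what an adaptive alarm block might look like (not the structure developed in the thesis), the sketch below tracks a signal baseline with an exponentially weighted moving average and flags samples that stray beyond an adaptive threshold; all parameters and the injected fault are invented.

```python
import numpy as np

# Adaptive alarm sketch: an EWMA tracks the signal baseline and variance,
# and an alarm fires when the residual exceeds k sigma. Parameters invented.
def adaptive_alarm(samples, alpha=0.05, k=4.0):
    mean, var = samples[0], 1.0
    alarms = []
    for i, x in enumerate(samples):
        resid = x - mean
        if abs(resid) > k * np.sqrt(var):
            alarms.append(i)                     # flag fault candidate
            continue                             # do not adapt to faulty data
        mean += alpha * resid                    # update baseline
        var = (1 - alpha) * var + alpha * resid ** 2
    return alarms

data = np.random.default_rng(3).normal(0, 1, 500)
data[300:] += 8.0                                # injected step fault
print("first alarm at sample:", adaptive_alarm(data)[0])
```

Freezing the statistics while an alarm is active, as done above, is one simple way to keep the detector from slowly "learning" the fault as the new normal.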
88

Electrode geometry effects on the collection efficiency of submicron and ultrafine dust particles in wire-plate electrostatic precipitators

Brocilo, Drazena 08 1900 (has links)
Recent interest in emission control of fine particulate matter has resulted from scientific studies of its effect on human health. Hence, many western countries have introduced a new emission regulation, known as PM2.5, that covers particles less than 2.5 μm in diameter. Existing particle separation devices such as electrostatic precipitators (ESPs) are of particular interest since they can capture particles effectively and economically, with a low pressure drop. Present ESPs provide high collection efficiencies, around 99.99%, for micron-size and larger particles. However, the collection efficiency for submicron particles, in the range from 0.1 to 1 μm, and for ultrafine particles, with diameters less than 0.1 μm, can be less than 50%. In this work, numerical and experimental studies were conducted to examine the effect of electrode geometry on the collection efficiency of submicron and ultrafine dust particles in electrostatic precipitators. The collection efficiency prediction was based on a modified Deutsch equation, after calculation of the three-dimensional electric potential and ion distribution. Particle charging models for the diffusion and field charging regimes were considered, selected according to the Knudsen number (Kn = 2λᵢ/dₚ, where λᵢ is the mean free path of negative ions and dₚ is the dust particle diameter). A constitutive relationship developed from optical emission experiments was implemented to simulate the ion distribution of the corona discharge for various discharge electrodes. Experimental validations of total and partial collection efficiencies for particle sizes from 10⁻² to 20 μm were conducted for bench-scale and full-scale ESPs. Results show that the collection efficiency of submicron and ultrafine particles can be predicted with good accuracy for various geometries of discharge and dust collection electrodes. The spike-type discharge electrode with the I-type collecting electrode improves the collection efficiency of fine particles when compared to the wire or rod discharge electrode with the I-type collecting electrode. In the case of U- and C-type collecting electrodes, there is an optimum fin length for which the highest collection efficiency is reached. Comparison of experimental and predicted results shows that the total collection efficiency predicted by the present model agrees well with experimental results for the bench-scale ESPs. For the large-scale wire-plate ESP, simulations conducted for various gas temperatures and dust resistivities agree quantitatively and qualitatively with the experimental results. The model proved to be useful for prototype design of collecting and discharge electrodes, modification of existing ESPs, and scale-up of new ESPs in order to meet the new emission regulations. / Doctor of Philosophy (PhD)
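For context, the classical Deutsch equation referenced above relates collection efficiency to the effective particle migration velocity w, the collecting electrode area A, and the gas flow rate Q as η = 1 − exp(−wA/Q); the modified form used in the thesis builds on this. A worked numeric sketch with invented values:

```python
import math

# Classical Deutsch(-Anderson) collection efficiency: eta = 1 - exp(-w * A / Q).
# The area, flow rate, and migration velocities below are illustrative only.
A = 500.0     # collecting electrode area, m^2
Q = 10.0      # gas volume flow rate, m^3/s
for w in (0.02, 0.05, 0.10):   # migration velocity, m/s (smaller for finer dust)
    eta = 1.0 - math.exp(-w * A / Q)
    print(f"w = {w:4.2f} m/s -> efficiency = {eta:.4%}")
```

Because fine particles acquire less charge and migrate slowly, their effective w is small, and the exponential makes the efficiency fall off sharply; this is the quantitative reason submicron collection lags so far behind the 99.99% figure for large particles.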
89

Precision Synchronization of Navigation Satellites in Inclined Orbit From an Earth Station

Sabry, Ibrahim Ehab 11 1900 (has links)
The possibility of establishing accurate timing on board a navigation satellite in inclined orbit, using a timing reference from either an earth station or a geostationary satellite, has been an important task for the last fifteen years. More recently, considerable effort has gone into designing and developing the NAVSTAR system, which uses atomic clocks on board satellites in inclined orbits to establish accurate time.

In this thesis we discuss another possible mode of operation, based on the transponding of timing information from an earth station to a navigation satellite in inclined orbit through a satellite in geostationary orbit. Assuming that the satellite in geostationary orbit has a constant space delay to the earth station, the only change in the space delay between the earth station and the satellite in inclined orbit occurs between the two satellites.

The main advantages of our solution to this problem are:

1) Atomic clocks on board the navigation satellite are no longer required.

2) A communication link now exists between the earth station and all users of the system through the navigation satellite, since the satellite in geostationary orbit can observe both the navigation satellite in inclined orbit and the earth station located inside its coverage area.

We assume the following:

1) The location of the geostationary satellite is accurately known. This is usually true, since its motion with respect to earth stations is small.

2) The space delay from the earth station to the geostationary satellite can be determined to within less than 1 ns using conventional TDMA timing methods; thus, accurate time at the geostationary satellite is established.

3) The distance between the geostationary satellite and the navigation satellite varies smoothly with time.

4) The location of the navigation satellite is known to lie within a sphere of certain radius centred at a known point.

If we calculate in advance the actual space delay between the geostationary satellite and the navigation satellite, and know the timing on board the geostationary satellite, we can establish timing on board the navigation satellite. The technique we use in computing the uplink and downlink delays between the two satellites depends on estimating the navigation satellite's location. The error in estimating this location does not affect the calculation of the space delay between the two satellites directly; rather, its effect is reflected in the change of the space delay.

The computed results show that the estimated uplink and downlink space delays between the two satellites can be calculated to a high degree of accuracy (a fraction of a nanosecond or less). Thus it appears that this system could be practical, especially for commercial use, which may include communication links as well as navigation information. / Doctor of Philosophy (PhD)
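The delay computation at the heart of this scheme reduces to geometry: with both satellite positions expressed in a common frame, the one-way space delay is the separation divided by the speed of light. A toy sketch with invented positions (not real ephemerides):

```python
import numpy as np

C = 299_792_458.0                    # speed of light, m/s

def one_way_delay(r_a, r_b):
    """One-way space delay between two position vectors (metres -> seconds)."""
    return np.linalg.norm(np.asarray(r_a) - np.asarray(r_b)) / C

# Illustrative ECEF-like positions in metres: a geostationary satellite and
# an inclined-orbit navigation satellite. Invented values, not ephemerides.
r_geo = [42_164_000.0, 0.0, 0.0]
r_nav = [18_000_000.0, 12_000_000.0, 14_000_000.0]

print(f"inter-satellite delay: {one_way_delay(r_geo, r_nav) * 1e3:.3f} ms")

# Scale note: 1 ns of delay corresponds to about 0.3 m of range, so
# sub-nanosecond timing requires sub-0.3 m knowledge of the inter-satellite
# range -- hence the smooth-variation assumption on the distance above.
```

This is only the geometric core; the thesis's contribution lies in estimating the navigation satellite's location and exploiting the smooth variation of the range so that the residual delay error stays below a nanosecond.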
90

Model Based Synchronization of Monitoring and Control Systems

Storoshchuk, Lev Orest 09 1900 (has links)
This research identifies and provides novel solutions to challenges that arise when implementing an Agile Manufacturing Information System (AMIS). It constitutes a body of original work in that it successfully integrates aspects of real-time systems control, computer science, electrical engineering, knowledge capture, and technology insertion to address the issues faced when developing an AMIS. These include:

1. The divergence between manufacturing control and monitoring systems, addressed by complementing current University of Michigan research on domain models and developing a novel mapping from the model to PLC ladder logic.

2. Development of the control code, heretofore lacking sufficient application of scientific methodology, by providing a method for integrating components into a control plan, capturing and representing a control engineer's knowledge in a generalized abstract model, and removing the need for specialized knowledge.

3. The lack of system context provided by current monitoring systems, addressed by developing an application generator, based on the generalized abstract model, which automatically creates synchronized control code and the information needed by the monitoring system.

4. The expressed lack of experimentation in the software community to transition technology from the theoretical domain to industrial practice, addressed through an experimental effort with industry in which the theoretical methodology was successfully transferred to practice in the manufacturing community. An experimental environment was developed to allow other researchers to replicate the experiments and to extend the research. / Doctor of Philosophy (PhD)
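As a loose illustration of generating control code from an abstract model, the sketch below emits instruction-list-style statements (a textual cousin of ladder logic) from a tiny state-transition table; the model format and mnemonics are invented for illustration and are not the mapping developed in this research.

```python
# Sketch: emit IL-style statements (a textual form of ladder logic) from a
# tiny state-transition model. Model format and mnemonics are invented.
model = {
    # state: (condition_input, next_state, output_coil_or_None)
    "IDLE":    ("START_PB", "RUNNING", "MOTOR"),
    "RUNNING": ("STOP_PB",  "IDLE",    None),
}

def emit_il(model):
    lines = []
    for state, (cond, nxt, coil) in model.items():
        lines.append(f"LD  {state}")     # rung is active while in this state
        lines.append(f"AND {cond}")      # transition condition input
        if coil:
            lines.append(f"SET {coil}")  # energize output on transition
        lines.append(f"SET {nxt}")       # latch next state
        lines.append(f"RST {state}")     # release current state
        lines.append("")
    return "\n".join(lines)

print(emit_il(model))
```

Generating control and monitoring artifacts from one shared model, as this toy generator does, is what keeps the two views synchronized by construction rather than by manual discipline.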
