411

Image processing through vector quantization

Panchapakesan, Kannan January 2000 (has links)
Vector quantization (VQ) is an established data compression technique. It has been successfully used to compress signals such as speech, imagery, and video. In recent years, it has been employed to perform various image processing tasks such as edge detection, classification, and volume rendering. The advantages of using VQ depend on the specific task but typically include memory savings, computational gains, or the inherent compression it offers. Nonlinear interpolative vector quantization (NLIVQ) was introduced as an approach to overcome the curse of dimensionality incurred by an unconstrained, exhaustive-search VQ, especially at high rates. In this dissertation, it is modified to accomplish specific tasks, and VQ-based techniques are introduced for the following image processing problems. (1) Blur identification: VQ encoder distortion is used to identify image blur. The blur is estimated by choosing from a finite set of candidate blur functions. A VQ codebook is trained on images corresponding to each candidate blur. The blur in an image is then identified by choosing, from the candidates, the one whose codebook provides the lowest encoder distortion. (2) Superresolution: Images obtained through a diffraction-limited optical system do not possess any information beyond a certain cut-off frequency and are therefore limited in their resolution. Superresolution refers to the endeavor of improving the resolution of such images. Superresolution is achieved through an NLIVQ trained on pairs of original and blurred images. (3) Joint compression and restoration: Combining compression and restoration in one step is useful from the standpoints of memory and computing needs. An NLIVQ is suggested for this purpose that performs the restoration entirely in the wavelet transform domain. The training set for VQ design consists of pairs of original and blurred images. (4) Combined compression and denoising: Compression of a noisy source is a classic problem that involves the combined efforts of compression and denoising (estimation). A robust NLIVQ technique is presented that first identifies the variance of the noise in an image and subsequently performs simultaneous compression and denoising.
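
As an illustration of the blur-identification idea in (1), a minimal sketch follows. It assumes SciPy's k-means for codebook training; the Gaussian blur candidates, block size, and codebook size are illustrative choices, not the dissertation's.

```python
# Minimal sketch of VQ-based blur identification: one codebook per candidate
# blur, and the blur whose codebook gives the lowest encoder distortion wins.
import numpy as np
from scipy.cluster.vq import kmeans2, vq
from scipy.ndimage import gaussian_filter

def to_blocks(img, b=4):
    """Split an image into non-overlapping b x b blocks, one block per row."""
    h, w = img.shape
    img = img[:h - h % b, :w - w % b]
    return img.reshape(h // b, b, -1, b).swapaxes(1, 2).reshape(-1, b * b).astype(float)

def train_codebooks(train_imgs, blur_sigmas, k=64):
    """Train one codebook per candidate blur on correspondingly blurred images."""
    codebooks = {}
    for s in blur_sigmas:
        blocks = np.vstack([to_blocks(gaussian_filter(im, s)) for im in train_imgs])
        codebooks[s], _ = kmeans2(blocks, k, minit='++')
    return codebooks

def identify_blur(observed, codebooks):
    """Pick the candidate whose codebook yields the lowest mean encoder distortion."""
    blocks = to_blocks(observed)
    distortions = {s: vq(blocks, cb)[1].mean() for s, cb in codebooks.items()}
    return min(distortions, key=distortions.get)
```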
412

New aspects of digital color image enhancement

Thomas, Bruce Allen January 1999 (has links)
The spatial and chromatic dimensions of digital color image information exhibit unique interrelationships that invite new, color-image-specific, processing strategies. A quantitative exploration of these interrelationships is performed. The resulting data reveals key traits that lead to two new methods of color image enhancement. The first is a method of color image contrast enhancement that exploits the existence of high-pass spatial energy in certain chromatic color components. We present a new, spatially adaptive approach that acknowledges the spatially varying nature of cross-component correspondences. This new approach is suitable for use in any color space. The second is a method of color image denoising that exploits the unique correspondences of polychromatic multiscale edges at fine scales of analysis. The multiscale edges are derived using wavelet methods. This approach preserves image details and noticeably outperforms wavelet thresholding methods of denoising in images containing natural foliage.
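
The cross-component idea can be loosely sketched as follows. This is not the dissertation's algorithm, only a toy illustration of spatially adaptive detail transfer in which a local correlation weight gates how much chromatic high-pass energy reinforces the luminance channel; the window size, filter scale, and gain are assumptions.

```python
# Toy sketch of spatially adaptive cross-component contrast enhancement:
# chromatic high-pass detail is added to luminance only where the two
# channels agree locally (high windowed correlation).
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def local_corr(a, b, win=7):
    """Windowed correlation coefficient between two channels."""
    ma, mb = uniform_filter(a, win), uniform_filter(b, win)
    va = uniform_filter(a * a, win) - ma ** 2
    vb = uniform_filter(b * b, win) - mb ** 2
    cov = uniform_filter(a * b, win) - ma * mb
    return cov / np.sqrt(np.maximum(va * vb, 1e-8))

def enhance_luma(y, chroma, sigma=1.5, gain=0.5):
    """Reinforce luminance contrast with chromatic high-pass detail."""
    detail = chroma - gaussian_filter(chroma, sigma)   # high-pass of the chroma channel
    weight = np.clip(local_corr(y, chroma), 0.0, 1.0)  # spatially varying weight
    return y + gain * weight * detail
```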
413

Technical advances in volume holographic memories

King, Brian Michael January 2001 (has links)
Volume holographic memories (VHMs) are a candidate technology for next-generation high-density and high data-rate digital storage. Capacities greater than 1 Terabit are promised, available at read-out rates exceeding 1 Gigabit per second. The capacity target will be achieved through two mechanisms. First, retrieval in a VHM reconstructs a holographic page (a two-dimensional image) captured on a CCD (charge-coupled device) camera. Each page represents on the order of one million bits of data, by encoding the data as bright and dark pixels in the 1024 x 1024 stored/retrieved image. Second, due to the thickness of the recording medium, a large number of such pages can be recorded in the same volume of material.

In this dissertation we address some of the difficult technical issues that either currently limit the VHM system design, or are expected to become a limiting factor in the future. The first such concern involves how to process the simultaneous optical arrival of one million pixels. In high-density storage, there will be significant cross-talk between pixels which limits the storage capacity. We develop a novel highly-parallel focal-plane processor, which can significantly improve the system capacity by performing reliable detection in the presence of optical blur and alignment errors introduced by the imaging system. A fabricated proof-of-concept VLSI design is described.

Another fundamental noise source is caused by the cross-talk between holographic pages. Reconstruction of the desired data page reconstructs every page in the memory, albeit at a very low relative diffraction efficiency. As the number of multiplexed pages increases, the cross-talk from the other pages can constitute a significant optical field noise source. Apodization seeks to either suppress this noise source or control it such that system tolerances can be relaxed.

Bright data pixels are stored by altering the material properties of the crystal. However, dark pixels require no adjustment to the crystal; they are implicitly stored. This asymmetric storage cost drives a capacity improvement by biasing the data pages to contain more dark pixels and fewer bright pixels. An increased number of pages can be stored at the same reconstruction fidelity. We propose a novel modulation code to encode and decode these sparse data pages. Experimental results are presented showing the improvement in capacity.

If the data page is composed of non-binary or grayscale pixels, then a further capacity enhancement is possible. The previous binary modulation code is extended into an arbitrary grayscale modulation code and a low-complexity maximum-likelihood decoder is developed as well as a mathematical proof of correctness. Extensive experimental results verify that the proposed method is practical and offers a substantial capacity improvement.
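
The sparse (dark-biased) page idea can be illustrated with a toy constant-weight detector that marks the k brightest received intensities in a block as ones. Block size and sparsity are illustrative, and the dissertation's modulation code and maximum-likelihood decoder are considerably richer than this sketch.

```python
# Toy sketch of sparse-page detection: if each block is known to contain
# exactly k bright pixels, mark the k largest received intensities as ones.
import numpy as np

def detect_sparse_block(intensities, k):
    """Return a binary block with ones at the k brightest received pixels."""
    flat = intensities.ravel()
    idx = np.argpartition(flat, -k)[-k:]          # indices of the k largest values
    out = np.zeros_like(flat, dtype=np.uint8)
    out[idx] = 1
    return out.reshape(intensities.shape)

# Example: an 8 x 8 block stored with k = 12 bright pixels, read back noisily.
rng = np.random.default_rng(0)
page = np.zeros((8, 8), dtype=np.uint8)
page.ravel()[rng.choice(64, 12, replace=False)] = 1
received = page + 0.3 * rng.standard_normal(page.shape)   # optical/CCD noise
recovered = detect_sparse_block(received, k=12)
print("bit errors:", int(np.sum(recovered != page)))
```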
414

Field programmable analog array synthesis

Wang, Haibo January 2002 (has links)
Field programmable analog arrays (FPAAs), the analog counterparts of digital field programmable gate arrays (FPGAs), are suitable for prototyping analog circuits and implementing dynamically re-configurable analog systems. Although various FPAA architectures have been developed recently, very little work has been reported in the area of design automation for field programmable analog arrays. The lack of sophisticated FPAA synthesis tools is becoming one of the key limitations toward fully exploiting the advantages of FPAAs. To address this problem, this dissertation presents a complete synthesis flow that can automatically translate abstract-level analog function descriptions into FPAA circuit implementations. The proposed synthesis flow consists of function decomposition, macro-cell synthesis, placement & routing, and post-placement simulation subroutines. The function decomposition subroutine is aimed at decomposing high-order analog functions into low-order sub-functions. This not only increases the accuracy of the realized analog functions, but also reduces the routing complexity of the synthesized circuits. The macro-cell synthesis subroutine generates circuit implementations for the decomposed sub-functions. Then, FPAA placement & routing is performed to map the synthesized analog circuits onto FPAA chips. The final stage of the synthesis flow is post-placement simulation, which is used to verify that the synthesized circuits meet performance specifications. The major contributions of this dissertation are techniques developed for implementing the FPAA synthesis flow. In the work of function decomposition, we developed theoretical proofs for two optimization criteria that were previously used to search for optimal function decompositions. In addition, we developed more efficient procedures for finding such decompositions. To implement the macro-cell synthesis subroutine, we proposed a modified signal flow graph to represent FPAA circuits. Graph transformations are introduced for exploring alternative circuit structures in FPAA synthesis. Finally, in the work of FPAA placement and routing, an efficient method for estimating FPAA parasitic effects was developed. The effectiveness of the developed techniques is demonstrated by experiments synthesizing various FPAA circuits. The proposed synthesis methodologies will significantly simplify the use of FPAAs, and consequently make FPAAs more appealing in analog design.
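
The function-decomposition step can be illustrated, loosely, with a standard cascade-of-biquads decomposition of a high-order transfer function using SciPy. The optimization criteria proved and exploited in the dissertation for choosing among decompositions are not reproduced here; the prototype filter is an arbitrary example.

```python
# Sketch of function decomposition: split a high-order transfer function into
# second-order sections (biquads), the kind of low-order sub-functions an
# FPAA macro-cell can realize.
import numpy as np
from scipy import signal

# Illustrative 6th-order low-pass prototype (Chebyshev I, 1 dB ripple).
b, a = signal.cheby1(6, 1, 0.3)            # normalized cutoff 0.3 (Nyquist = 1)
sos = signal.tf2sos(b, a)                  # one row per biquad: b0 b1 b2 a0 a1 a2
for i, sec in enumerate(sos):
    print(f"biquad {i}: b = {sec[:3]}, a = {sec[3:]}")

# Sanity check: the cascade reproduces the original frequency response.
w, h_tf = signal.freqz(b, a)
_, h_sos = signal.sosfreqz(sos, worN=w)
print("max response mismatch:", np.max(np.abs(h_tf - h_sos)))
```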
415

Wavelet domain image restoration and super-resolution

Goda, Matthew January 2002 (has links)
Multi-resolution techniques, and especially the wavelet transform, provide unique benefits in image representation and processing not otherwise possible. While wavelet applications in image compression and denoising have become extremely prevalent, their use in image restoration and super-resolution has not been exploited to the same degree. One issue is the extension of 1-D wavelet transforms into 2-D via separable transforms, versus the non-separability of typical circular-aperture imaging systems. This mismatch leads to performance degradations. Image restoration, the inverse problem to image formation, is the first major focus of this research. A new multi-resolution transform is presented to improve performance. The transform is called a Radially Symmetric Discrete Wavelet-like Transform (RS-DWT) and is designed based on the non-separable blurring of the typical incoherent circular-aperture imaging system. The results using this transform show marked improvement compared to other restoration algorithms, both in mean square error and in visual appearance. Extensions to the general algorithm that further improve results are discussed. The ability to super-resolve imagery using wavelet-domain techniques is the second major focus of this research. Super-resolution, the ability to reconstruct object information lost in the imaging process, has been an active research area for many years. Multiple experiments are presented which demonstrate the possibilities and problems associated with super-resolution in the wavelet domain. Finally, super-resolution in the wavelet domain using Non-Linear Interpolative Vector Quantization is studied and the results of the algorithm are presented and discussed.
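
A generic wavelet-domain restoration loop, with a standard separable DWT standing in for the RS-DWT developed in the dissertation, might look like the following sketch. The regularization constant, wavelet, and threshold are illustrative assumptions.

```python
# Sketch of generic wavelet-domain restoration: a regularized (Wiener-like)
# inverse filter in the Fourier domain, followed by soft-thresholding of the
# wavelet coefficients to suppress amplified noise.
import numpy as np
import pywt

def restore(blurred, psf, noise_var=1e-3, wavelet="db4", level=3, thresh=0.05):
    # Regularized inverse filter (psf assumed already registered to the grid).
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_var)
    estimate = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

    # Wavelet shrinkage of the deconvolved estimate.
    coeffs = pywt.wavedec2(estimate, wavelet, level=level)
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)
```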
416

Importance sampling for LDPC codes and turbo-coded CDMA

Xia, Bo January 2004 (has links)
Low-density parity-check (LDPC) codes have shown capacity-approaching performance with soft iterative decoding algorithms. Simulating LDPC codes at very low error rates normally takes an unacceptably long time. We consider importance sampling (IS) schemes for the error rate estimation of LDPC codes, with the goal of dramatically reducing the necessary simulation time. In IS simulations, the sampling distribution is biased to emphasize the occurrence of error events; large efficiency gains are possible with a properly chosen bias. For LDPC codes, we propose an IS scheme that overcomes a difficulty in traditional IS designs that require codebook information. This scheme is capable of estimating both codeword and bit error rates. As an example, IS gains on the order of 10⁵ are observed at a bit error rate (BER) of 10⁻¹⁵ for a (96, 48) code. We also present an importance sampling scheme for the decoding of loop-free multiple-layer trees. This scheme is asymptotically efficient in that, for an arbitrary tree and a given estimation precision, the required number of simulations is inversely proportional to the noise standard deviation. The motivation of this study is to shed light on an asymptotically efficient IS design for LDPC code simulations. For an example depth-3 regular tree, we show that only 2400 simulation runs are needed to achieve a 10% estimation precision at a BER of 10⁻⁷⁵. Similar promising results are also shown for a length-9 rate-1/3 regular code after being converted to a decoding tree. Finally, we consider a convolutionally coded CDMA system with iterative multiuser detection and decoding. In contrast to previous work in this area, a differential encoder is inserted to effect an interleaver gain. We view the CDMA channel as a periodically time-varying ISI channel. The receiver jointly decodes the differential encoders and the CDMA channel with a combined trellis, and shares soft output information with the convolutional decoders in an iterative (turbo) fashion. Dramatic gains over conventional convolutionally coded systems are demonstrated via simulation. We also show that there exists an optimal code rate under a bandwidth constraint. The performance and optimal code rates are also demonstrated via density evolution analysis.
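
The IS mechanics (biased sampling plus likelihood-ratio reweighting) can be sketched for a plain BPSK/AWGN channel as follows; the codebook-free biasing developed for LDPC decoders in the dissertation is not reproduced. The shift amount and trial count are illustrative.

```python
# Mean-translation importance-sampling sketch for BPSK over AWGN: noise is
# drawn from a shifted density so errors occur often, and each error event is
# weighted by the likelihood ratio of the true density to the biased density.
import numpy as np

def is_ber_estimate(snr_db, shift, n_trials=100_000, seed=1):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))      # noise std for unit-energy BPSK
    x = 1.0                                          # transmitted symbol (+1)
    noise = rng.normal(-shift, sigma, n_trials)      # biased: mean shifted toward errors
    errors = (x + noise) < 0                         # hard-decision bit errors
    # Likelihood ratio f(noise) / f*(noise) for the mean-shifted Gaussian bias.
    w = np.exp((2 * noise * shift + shift ** 2) / (2 * sigma ** 2))
    return np.mean(errors * w)                       # unbiased estimate of the BER

print(is_ber_estimate(snr_db=8.0, shift=1.0))
```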
417

An extended cavity, self focussing laser optical head

Partow, Sepehr, 1965- January 1991 (has links)
A feasibility study of an "Extended Cavity, Self Focussing Laser Optical Head" for optical data storage applications is presented. A general description of the proposed device is given, followed by a prediction of its dynamic operation. This is verified by a one-dimensional computer model simulating dynamic laser-head behavior. Transient laser phenomena such as longitudinal mode competition and laser frequency modulation are investigated as they apply to the device's operation. The self-focussing concept is confirmed by a passive-cavity experiment and a geometrical computer model of the cold cavity (i.e., no gain medium).
418

Nonlinear self-focus of pulsed-wave beams in Kerr media

Judkins, Justin Boyd, 1967- January 1992 (has links)
A modified finite-difference time-domain method for solving Maxwell's equations in nonlinear media is presented. This method allows a finite response time to be incorporated in the medium, physically creating dispersion and absorption mechanisms. Our technique models electromagnetic fields in two space dimensions and time, and encompasses both the TEz and TMz sets of decoupled field equations. Aspects of an ultra-short pulsed Gaussian beam are studied in a variety of linear and nonlinear environments to demonstrate that the methods developed here can be used efficaciously in the modeling of pulses in complex problem-space geometries even when nonlinearities are present.
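
A one-dimensional leapfrog update with an instantaneous Kerr term conveys the flavor of the method; the dissertation's model is two-dimensional and includes a finite medium response time, and all parameters below are illustrative.

```python
# 1-D FDTD sketch with an instantaneous Kerr nonlinearity
# (eps = eps_lin + chi3 * |E|^2). Illustrative only: the full method is 2-D
# and uses a finite (non-instantaneous) medium response.
import numpy as np

c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
nz, dz = 2000, 1e-8                      # grid cells and spacing (10 nm)
dt = 0.5 * dz / c0                       # Courant-stable time step
eps_lin, chi3 = 2.25 * eps0, 1e-20       # linear glass + illustrative Kerr coefficient

E = np.zeros(nz)
H = np.zeros(nz - 1)

for n in range(4000):
    # Update H from the curl of E (leapfrog, half-step offset).
    H += dt / (mu0 * dz) * (E[1:] - E[:-1])
    # Intensity-dependent permittivity implements the Kerr effect.
    eps = eps_lin + chi3 * E[1:-1] ** 2
    E[1:-1] += dt / (eps * dz) * (H[1:] - H[:-1])
    # Hard source: short modulated Gaussian pulse injected near the left edge.
    E[10] += np.exp(-((n - 300) / 100.0) ** 2) * np.cos(2 * np.pi * c0 / 8e-7 * n * dt)
```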
419

Electrical characterization and plasma impedance measurements of a RF plasma etch system

Roth, Weston Charles, 1970- January 1995 (has links)
A modified Tegal MCR-1 plasma etch system has been electrically characterized, and the plasma impedance has been measured at 13.56 MHz. Important aspects of radio-frequency (RF) impedance measurements are addressed as they pertain to the measurement of the plasma impedance. These include: transmission line effects, magnitude and phase errors of the measurement probes, and the intrinsic impedance of the empty plasma chamber. Plasma harmonics are discussed, and a technique for measuring the plasma impedance at harmonic frequencies is presented. Transients in the plasma impedance are observed during the first 5 minutes after the plasma is initiated, and correspond to a decrease in the impedance. Residual gas analysis (RGA) confirms the presence of H₂O in the plasma. The H₂O ion current measured by RGA shows a downward transient similar to the impedance transients, suggesting a possible relationship between H₂O and the impedance transients. A possible explanation for these impedance transients is presented.
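
Extracting the complex impedance at the 13.56 MHz fundamental from sampled probe waveforms can be sketched as follows; the probe magnitude/phase correction factors are placeholders, not calibration values from the thesis.

```python
# Sketch of impedance extraction at the RF fundamental: FFT the sampled
# voltage- and current-probe waveforms, pick the bin nearest 13.56 MHz,
# apply (assumed) probe corrections, and divide.
import numpy as np

def impedance_at_fundamental(v, i, fs, f0=13.56e6,
                             v_corr=1.0 + 0j, i_corr=1.0 + 0j):
    """Return Z = V/I at the bin nearest f0; *_corr are probe calibration factors."""
    freqs = np.fft.rfftfreq(len(v), 1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))            # bin nearest the fundamental
    V = np.fft.rfft(v)[k] * v_corr
    I = np.fft.rfft(i)[k] * i_corr
    return V / I

# Example with synthetic waveforms: 200 V, 2 A, current lagging by 30 degrees.
fs, n = 271.2e6, 4000                            # 20 samples per RF cycle, 200 cycles
t = np.arange(n) / fs
v = 200 * np.cos(2 * np.pi * 13.56e6 * t)
i = 2 * np.cos(2 * np.pi * 13.56e6 * t - np.deg2rad(30))
print(impedance_at_fundamental(v, i, fs))        # ~ 100 ohms at +30 degrees
```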
420

Design of an optimum driver circuit for CW laser diodes

Hajiaghajani, Kazem, 1955- January 1992 (has links)
A poorly designed drive circuit leads at best to unstable optical output power and/or frequency, and at worst to permanent damage to the laser diode. Thermal stress on the laser diode junction and noise from various sources degrade the diode's performance and may result in its damage, and transients may destroy the laser diode outright. This thesis explores the failure mechanisms of a laser diode and offers solutions to prevent and/or control them. General drive circuit considerations and the requirements of various demanding applications of a CW laser diode are discussed. Finally, a fully functional drive circuit is presented.
