371. Characterization and Modeling of Nonlinear Dark Current in Digital Imagers. Dunlap, Justin Charles. 14 November 2014.
Dark current is an unwanted source of noise in images produced by digital imagers, the de facto standard of imaging. The two most common digital imager architectures, Charge-Coupled Devices (CCDs) and Complementary Metal-Oxide-Semiconductor (CMOS) sensors, are both prone to this noise source. To accurately reflect the information carried by the light signal, this noise must be removed. This practice is especially vital for scientific purposes such as astronomical observations.
Presented in this dissertation are characterizations of dark current sources that complicate the traditional methods of correction. In particular, it is observed that pixels in both CCDs and CMOS image sensors produce dark current that is affected by pre-illumination of the sensor, and that these same pixels produce a nonlinear dark current with respect to exposure time. These two characteristics are not accounted for by conventional correction methods, which assume that the dark current produced is unaffected by charge accumulated from either illumination or the dark current itself.
Additionally, a model reproducing these dark current characteristics is presented. The model incorporates a moving edge of the depletion region, where charge is accumulated, as well as fixed recombination-generation sites. Recombination-generation sites, in the form of heavy-metal impurities or lattice defects, are commonly the source of dark current, especially in the highest-producing pixels, commonly called "hot pixels." The model predicts that pixels with recombination-generation sites near the edge of an empty depletion region will produce less dark current after charge has accumulated, accurately reproducing the behavior observed in empirical data.
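As a concrete illustration of this mechanism, the sketch below simulates a single pixel with one fixed recombination-generation site near the edge of the empty depletion region. It is a toy model in the spirit of the description above, not the dissertation's implementation, and every parameter (well capacity, depletion depth, site depth, generation rates) is an illustrative assumption.

```python
import numpy as np

# Toy moving-depletion-edge model (illustrative assumptions throughout).
FULL_WELL = 50_000.0   # electrons at saturation (assumed)
MAX_DEPTH = 3.0        # depletion depth in microns for an empty well (assumed)
BASE_RATE = 5.0        # e-/s from generation sites deep inside the depletion region
SITE_DEPTH = 2.5       # microns: R-G site near the edge of the empty depletion region
SITE_RATE = 40.0       # e-/s contributed while the site remains depleted

def depletion_depth(charge):
    """Depletion edge recedes toward the surface as charge accumulates."""
    return MAX_DEPTH * np.sqrt(max(1.0 - charge / FULL_WELL, 0.0))

def dark_counts(exposure_s, pre_illum_e=0.0, dt=1.0):
    """Integrate dark charge over an exposure, optionally after pre-illumination."""
    charge, dark = pre_illum_e, 0.0
    for _ in np.arange(0.0, exposure_s, dt):
        rate = BASE_RATE
        if SITE_DEPTH <= depletion_depth(charge):   # site still inside depleted volume
            rate += SITE_RATE
        charge += rate * dt
        dark += rate * dt
    return dark

# The hot-pixel contribution switches off once accumulated charge (from pre-illumination
# or from the dark current itself) pulls the depletion edge above the site, so dark
# counts grow nonlinearly with exposure time and drop after pre-illumination.
print(dark_counts(600.0), dark_counts(600.0, pre_illum_e=20_000.0))
```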
Finally, it is shown that activation energy calculations produce inconsistent results for pixels with recombination-generation sites near the edge of a moving depletion region. The activation energy, which characterizes the temperature dependence of dark current, is often calculated to identify aspects of the dark current such as the types of impurities present and the sources of the current. The model is shown to generate data whose calculated activation energies change in the same way as those of the pixels observed to be affected by pre-illumination and to produce inconsistent dark current over long exposure times.
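For reference, the standard activation energy estimate mentioned here is an Arrhenius fit of dark current versus temperature; the short sketch below shows the calculation on made-up example numbers (the temperatures and dark counts are not data from the dissertation).

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

# Hypothetical dark current measurements of one pixel at several temperatures.
temps_K = np.array([293.0, 303.0, 313.0, 323.0])
dark_e_per_s = np.array([2.1, 5.6, 14.0, 33.0])

# Fit ln(D) = ln(C) - E_a / (k_B * T); the slope of ln(D) versus 1/T is -E_a / k_B.
slope, intercept = np.polyfit(1.0 / temps_K, np.log(dark_e_per_s), 1)
print(f"E_a ~ {-slope * K_B:.2f} eV")
```

For the pixels described above, repeating such a fit at different exposure times or pre-illumination levels yields shifting values of E_a, which is the inconsistency the model reproduces.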
Rather than merely complicating dark current correction, the presence of such pixels, and the model explaining their behavior, presents an opportunity to obtain information, such as the depth of these recombination-generation sites, that will aid in refining manufacturing processes for digital imagers.
372. Automatic visual inspection of solder joints. Merrill, Paul A. January 1984.
No description available.
373. A study of Hadamard transform, DPCM, and entropy coding techniques for a realizable hybrid video source coder. Blumenthal, Robert E. January 1986.
No description available.
374. A Method for Automatic Synthesis of Aged Human Facial Images. Gandhi, Maulin R. January 2004.
375. A Cellular Algorithm for Data Reduction of Polygon Based Images. Caesar, Robert James. 01 January 1988.
The amount of information contained in an image is often much more than is necessary. Computer-generated images will always be constrained by the computer's resources or the time allowed for generation. Reducing the quantity of data in a picture while preserving its apparent quality can require complex filtering of the image data. This paper presents an algorithm for reducing data in polygon-based images, using different filtering techniques that take advantage of a priori knowledge of the images' content. One technique uses a novel implementation of vertex elimination. By passing the image through a sequence of controllable filtering stages, the image is segmented into homogeneous regions, simplified, then reassembled. The amount of data representing the picture is reduced considerably while a high degree of image quality is maintained. The effects of the different filtering stages are analyzed with regard to data reduction and picture quality as they relate to flight simulation imagery. Numerical results are included in the analysis. Further applications of the algorithm are discussed as well.
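As a rough illustration of what a vertex-elimination filtering stage does, the sketch below repeatedly drops the polygon vertex whose removal changes the shape the least. It is a generic area-based pass, not the cellular implementation developed in the thesis, and the tolerance and example polygon are assumptions.

```python
# Generic vertex-elimination pass (illustrative only, not the thesis's algorithm).
def triangle_area(a, b, c):
    """Area of the triangle spanned by three (x, y) points."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def eliminate_vertices(polygon, area_tol):
    """Return a simplified copy of `polygon` (a list of (x, y) vertices)."""
    pts = list(polygon)
    while len(pts) > 3:
        areas = [triangle_area(pts[i - 1], pts[i], pts[(i + 1) % len(pts)])
                 for i in range(len(pts))]
        i_min = min(range(len(pts)), key=areas.__getitem__)
        if areas[i_min] > area_tol:   # every remaining vertex is significant
            break
        del pts[i_min]                # drop the least significant vertex
    return pts

# A noisy hexagon collapses to a square once the near-collinear vertices are removed.
poly = [(0, 0), (5, 0.1), (10, 0), (10, 10), (5, 9.9), (0, 10)]
print(eliminate_vertices(poly, area_tol=1.0))
```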
376. Three-level block truncation coding. Lee, Deborah Ann. 01 January 1988.
Block Truncation Coding (BTC) techniques, to date, utilize a two-level image block code. This thesis presents and studies a new BTC method employing a three-level image coding technique (3LBTC). The new method is applied to actual image frames and compared with other well-known block truncation coding techniques.
The method separates an image into disjoint, rectangular regions of fixed size and finds the highest and lowest pixel values of each. Using these values, the image block pixel value range is calculated and divided into three equal sections. The individual image block pixels are then quantized according to the section into which their pixel value falls: a 2 if it falls in the upper third, a 1 in the middle third, and a 0 in the lower third. Thus, each pixel now requires only two bits for transmission. This is one bit per pixel more than the other well-known BTC techniques, so the method has a smaller compression ratio.
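A minimal sketch of the quantization step just described, for a single image block, is given below. It follows the abstract's description; the midpoint reconstruction levels are an assumption rather than the thesis's exact choice.

```python
import numpy as np

def three_level_btc_block(block):
    """Quantize one block to codes {0, 1, 2} plus its extrema (2 bits per pixel)."""
    lo, hi = float(block.min()), float(block.max())
    span = (hi - lo) / 3.0
    codes = np.zeros(block.shape, dtype=np.uint8)   # 0 = lower third
    codes[block > lo + span] = 1                    # 1 = middle third
    codes[block > lo + 2.0 * span] = 2              # 2 = upper third
    return codes, lo, hi

def reconstruct_block(codes, lo, hi):
    """Map each code to the midpoint of its third of the block's range (assumed levels)."""
    span = (hi - lo) / 3.0
    levels = np.array([lo + 0.5 * span, lo + 1.5 * span, lo + 2.5 * span])
    return levels[codes]

block = np.array([[12, 40, 200], [190, 90, 15], [30, 160, 80]], dtype=float)
codes, lo, hi = three_level_btc_block(block)
print(codes)
print(reconstruct_block(codes, lo, hi))
```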
When the BTC techniques were applied to actual images, the 3LBTC reconstructions had the smallest mean-squared error. The method also produced favorable results in terms of the entropy of the reconstructions compared to the entropy of the original images. The reconstructed images were very good replicas of the originals, and the 3LBTC process had the fastest processing speed. For applications where coding and reconstruction speed are crucial and bandwidth is not critical, 3LBTC provides an effective image coding solution.
377. Photo-graft: a critical analysis of image manipulation. Gavard, Sandra. January 1999.
No description available.
378. Optical music recognition using projections. Fujinaga, Ichiro. January 1988.
No description available.
379. Comparison of accuracy and efficiency of five digital image classification algorithms. Story, Michael Haun. 12 April 2010.
Accuracies and efficiencies of five algorithms for computer classification of multispectral digital imagery were assessed by application to imagery of three test sites (Roanoke, VA; Glade Spring, VA; and Topeka, KS). A variety of land cover features and two types of image data (Landsat MSS and Thematic Mapper) were represented. Classification algorithms were selected from the General Image Processing System (GIPSY) at the Spatial Data Analysis Laboratory at Virginia Polytechnic Institute and State University, Blacksburg, Virginia, and represent a range of available techniques, including:
a) AMOEBA (an unsupervised clustering technique with a spatial constraint)
b) ISODATA (a hybrid minimum distance classifier)
c) BOXDEC (a discrete parallelepiped classifier)
d) BCLAS (a Bayesian classifier)
e) HYPBOX (a combined parallelepiped-Bayesian classifier)
Two sets of training data, developed for each study site, were combined with each technique and applied to each study site.
Parallelepiped classifiers provided the highest classification accuracies but failed to categorize all pixels. The number of classified pixels could be altered by the method of selecting training data and/or adjusting the threshold variable. The minimum distance classifiers were most accurate when the spectral sub-class training data were used. Use of the land cover class training data provided the most accurate results for the Bayesian techniques and decreased the CPU requirements for all of the techniques.
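To illustrate why a parallelepiped classifier can leave pixels uncategorized, here is a minimal box classifier in that general spirit; the class names, band values, and two-band training samples are invented for the example and are not from the study.

```python
import numpy as np

def train_boxes(training):
    """training: {class_name: (n_samples, n_bands) array} -> per-class (min, max) box."""
    return {name: (x.min(axis=0), x.max(axis=0)) for name, x in training.items()}

def classify(pixels, boxes, unclassified=-1):
    """pixels: (n_pixels, n_bands). Returns one integer label per pixel."""
    names = list(boxes)
    labels = np.full(len(pixels), unclassified)
    for i, name in enumerate(names):
        lo, hi = boxes[name]
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == unclassified)] = i   # first matching box wins
    return labels, names

training = {"water":  np.array([[10, 8], [14, 9], [12, 7]]),
            "forest": np.array([[40, 60], [44, 66], [38, 58]])}
pixels = np.array([[11, 8], [42, 61], [90, 90]])   # the last pixel fits no box
labels, names = classify(pixels, train_boxes(training))
print(labels)   # [0, 1, -1]: the third pixel stays unclassified
```

Widening the boxes, for example with a per-band threshold, classifies more pixels at the cost of overlap between classes, mirroring the trade-off noted above.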
The most important consideration for accurate and efficient classification is to select the classification algorithm that matches the data structure of the training data.
380. Median filtering for target detection in an airborne threat warning system. Havlicek, Joseph P. January 1988.
Detection of point targets and blurred point targets in midwave infrared imagery is difficult because few assumptions can be made concerning the characteristics of the background. In this thesis, real-time spatial prefiltering algorithms that facilitate the detection of such targets in an airborne threat warning system are investigated. The objective of prefiltering is to pass target signals unattenuated while rejecting background and noise. The use of unsharp masking with median-filter masking operators is recommended. Experiments involving simulated imagery are described, and the performance of median-filter unsharp masking is found to be superior to that of the Laplacian filter, the linear point detection filter, and unsharp masking with a mean-filter mask.
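The prefiltering operation recommended here amounts to subtracting a median-filtered estimate of the background from the input frame; the sketch below shows that operation on a synthetic frame (the window size and frame contents are illustrative assumptions).

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
# Synthetic frame: smooth background gradient, sensor noise, and one point target.
frame = 50.0 + 0.05 * np.arange(64)[None, :] + rng.normal(0.0, 1.0, (64, 64))
frame[32, 32] += 20.0

background = median_filter(frame, size=5)   # local background estimate
prefiltered = frame - background            # residual passed to the detector

# The point target is too small to survive the median, so it remains in the
# residual while the slowly varying background is largely removed.
print(round(float(prefiltered[32, 32]), 1), round(float(prefiltered.mean()), 3))
```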
A primary difficulty in implementing real-time median filters is the design of a mechanism for extracting local order statistics from the input. By performing a space-time transformation on a standard selection network, a practical sorting architecture for this purpose is developed. A complete hardware median-filter unsharp masking design with a throughput of 25.6 million bits per second is presented and recommended for use in the airborne threat warning system.
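The selection-network idea can be illustrated in software with a fixed schedule of compare-exchange stages; the sketch below uses an odd-even transposition network over the nine samples of a 3x3 window and is only a conceptual stand-in for the space-time-transformed hardware architecture developed in the thesis.

```python
def compare_exchange(v, i, j):
    """One comparator: order v[i] and v[j]."""
    if v[i] > v[j]:
        v[i], v[j] = v[j], v[i]

def median_of_9(window):
    """Odd-even transposition network over 9 samples; the middle element is the median."""
    v = list(window)
    for stage in range(9):                 # nine fixed banks of comparators
        for i in range(stage % 2, 8, 2):
            compare_exchange(v, i, i + 1)
        # In hardware each stage is one comparator bank; a space-time transformation
        # reuses a single bank over successive clock cycles instead.
    return v[4]

print(median_of_9([7, 3, 9, 1, 5, 8, 2, 6, 4]))   # -> 5
```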