371

Quadtree-based processing of digital images

Naderi, Ramin 01 January 1986 (has links)
Image representation plays an important role in image processing applications, which usually contain a huge amount of data. An image is a two-dimensional array of points, and each point contains information (e.g., color). A 1024 by 1024 pixel image occupies 1 megabyte of space in main memory; in practice, 2 to 3 megabytes are needed to facilitate the various image processing tasks, and large amounts of secondary memory are also required to hold various data sets. In this thesis, two different operations on the quadtree are presented. There are, in general, two types of data compression techniques in image processing. One approach is based on eliminating redundant data from the original picture. Other techniques rely on higher levels of processing such as interpretation, generation, induction, and deduction procedures [1, 2]. One popular data representation that has received a considerable amount of attention in recent years is the quadtree data structure, which has led to the development of various techniques for performing conversions and operations on the quadtree. Klinger and Dyer [3] provide a good bibliography of the history of quadtrees. Their paper reports experiments on the degree of compaction of picture representation which may be achieved with tree encoding, and shows that tree encoding can produce memory savings. Pavlidis [15] reports on the approximation of pictures by quadtrees. Horowitz and Pavlidis [16] show how to segment a picture, by polygonal boundaries, using traversal of a quadtree. Tanimoto [17] discusses distortions which may occur in quadtrees for pictures, and observes [18, p. 27] that the quadtree representation is particularly convenient for scaling a picture by powers of two. Quadtrees are also useful in graphics and animation applications [19, 20], which are oriented toward construction of images from polygons and superpositions of images. Encoded pictures are useful for display, especially if the encoding lends itself to processing.
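Since the abstract describes the quadtree representation without showing it, the following is a minimal sketch of how a square, power-of-two binary image can be encoded as a quadtree: homogeneous blocks become leaves, and mixed blocks are split into four quadrants. The function and the test image are illustrative assumptions, not the conversion algorithms or quadtree operations developed in the thesis.

```python
import numpy as np

def build_quadtree(img, x=0, y=0, size=None):
    """Recursively encode a square, power-of-two binary image as a quadtree.

    Returns a leaf value (0 or 1) for a homogeneous block, or a list of four
    child nodes in NW, NE, SW, SE order. Illustration only.
    """
    if size is None:
        size = img.shape[0]          # assume a square, power-of-two image
    block = img[y:y + size, x:x + size]
    if block.min() == block.max():   # homogeneous block -> leaf node
        return int(block[0, 0])
    half = size // 2
    return [build_quadtree(img, x,        y,        half),   # NW quadrant
            build_quadtree(img, x + half, y,        half),   # NE quadrant
            build_quadtree(img, x,        y + half, half),   # SW quadrant
            build_quadtree(img, x + half, y + half, half)]   # SE quadrant

# A 4x4 test image with one homogeneous dark quadrant and three light ones.
img = np.zeros((4, 4), dtype=np.uint8)
img[:2, :2] = 1
print(build_quadtree(img))   # [1, 0, 0, 0]: four leaves, no further subdivision
```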
372

Characterization and Modeling of Nonlinear Dark Current in Digital Imagers

Dunlap, Justin Charles 14 November 2014 (has links)
Dark current is an unwanted source of noise in images produced by digital imagers, the de facto standard of imaging. The two most common digital imager architectures, Charge-Coupled Devices (CCDs) and Complementary Metal-Oxide-Semiconductor (CMOS) sensors, are both prone to this noise source. To accurately reflect the information from light signals, this noise must be removed; this practice is especially vital for scientific purposes such as astronomical observations. Presented in this dissertation are characterizations of dark current sources that complicate the traditional methods of correction. In particular, it is observed that pixels in both CCDs and CMOS image sensors produce dark current that is affected by pre-illumination of the sensor, and that these same pixels produce a nonlinear dark current with respect to exposure time. These two characteristics are not conventionally accounted for, as it is assumed that the dark current produced is unaffected by charge accumulated from either illumination or the dark current itself. Additionally, a model reproducing these dark current characteristics is presented. The model incorporates a moving edge of the depletion region, where charge is accumulated, as well as fixed recombination-generation locations. Recombination-generation sites in the form of heavy metal impurities or lattice defects are commonly the source of dark current, especially in the highest-producing pixels, commonly called "hot pixels." The model predicts that pixels with recombination-generation sites near the edge of an empty depletion region will produce less dark current after accumulation of charge, accurately reproducing the behavior observed from empirical sources. Finally, it is shown that activation energy calculations produce inconsistent results for pixels with recombination-generation sites near the edge of a moving depletion region. Activation energies, an energy associated with the temperature dependence of dark current, are often calculated to characterize aspects of the dark current, including types of impurities and sources of dark current. The model is shown to generate data, including changing activation energy values, that correspond with the changing activation energy calculations observed in pixels affected by pre-illumination and producing inconsistent dark current over long exposure times. Rather than only being a complication to dark current correction, the presence of such pixels, and the model explaining their behavior, presents an opportunity to obtain information, such as the depth of these recombination-generation sites, which will aid in refining manufacturing processes for digital imagers.
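The activation-energy calculations mentioned above are conventionally based on an Arrhenius-like temperature dependence of dark current, D(T) ≈ D0 exp(−Ea / kB T), so ln D is linear in 1/T with slope −Ea/kB. As a hedged illustration of that fitting step only (the dissertation's data, pixel selection, and depletion-region model are not reproduced), the sketch below estimates Ea from made-up dark-count measurements at a few sensor temperatures.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy(temps_K, dark_counts):
    """Estimate an activation energy from dark current vs. temperature.

    Assumes the conventional Arrhenius-type model D(T) ~ D0 * exp(-Ea/(kB*T)),
    so a least-squares line fit of ln(D) against 1/T has slope -Ea/kB.
    Inputs are hypothetical per-pixel mean dark counts at several temperatures.
    """
    inv_T = 1.0 / np.asarray(temps_K, dtype=float)
    ln_D = np.log(np.asarray(dark_counts, dtype=float))
    slope, _intercept = np.polyfit(inv_T, ln_D, 1)   # linear least-squares fit
    return -slope * K_B                              # Ea in eV

# Synthetic example values only; yields an Ea of roughly 0.66 eV.
temps = [280.0, 290.0, 300.0, 310.0]
counts = [12.0, 31.0, 75.0, 170.0]
print(f"Ea ~ {activation_energy(temps, counts):.2f} eV")
```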
373

Automatic visual inspection of solder joints

Merrill, Paul A. January 1984 (has links)
No description available.
374

A study of Hadamard transform, DPCM, and entropy coding techniques for a realizable hybrid video source coder

Blumenthal, Robert E. January 1986 (has links)
No description available.
375

A Method for Automatic Synthesis of Aged Human Facial Images

Gandhi, Maulin R. January 2004 (has links)
376

A Cellular Algorithm for Data Reduction of Polygon Based Images

Caesar, Robert James 01 January 1988 (has links) (PDF)
The amount of information contained in an image is often much more than is necessary. Computer generated images will always be constrained by the computer's resources or the time allowed for generation. Reducing the quantity of data in a picture while preserving its apparent quality can require complex filtering of the image data. This paper presents an algorithm for reducing data in polygon based images, using different filtering techniques that take advantage of a priori knowledge of the images' content. One technique uses a novel implementation of vertex elimination. By passing the image through a sequence of controllable filtering stages, the image is segmented into homogeneous regions, simplified, then reassembled. The amount of data representing the picture is reduced considerably while a high degree of image quality is maintained. The effects of the different filtering stages are analyzed with regard to data reduction and picture quality as they relate to flight simulation imagery. Numerical results are included in the analysis. Further applications of the algorithm are discussed as well.
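The abstract names vertex elimination as one of the filtering stages without detailing it. As a generic, hedged illustration of the idea only (not the thesis's cellular, multi-stage implementation), the sketch below drops any polygon vertex whose removal changes the outline by less than an assumed area tolerance.

```python
def eliminate_vertices(polygon, area_tol):
    """One pass of a simple vertex-elimination filter.

    A vertex is dropped when the triangle it forms with its two neighbours has
    an area below area_tol, i.e. removing it barely changes the outline.
    Generic illustration; the tolerance and the single-pass scheme are assumptions.
    """
    def tri_area(a, b, c):
        # Half the magnitude of the cross product of the two edge vectors.
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

    kept = []
    n = len(polygon)
    for i in range(n):
        prev_pt = polygon[i - 1]
        next_pt = polygon[(i + 1) % n]
        if tri_area(prev_pt, polygon[i], next_pt) >= area_tol:
            kept.append(polygon[i])
    return kept

# A square with one nearly collinear midpoint that gets filtered out.
poly = [(0, 0), (5, 0.05), (10, 0), (10, 10), (0, 10)]
print(eliminate_vertices(poly, area_tol=1.0))  # the (5, 0.05) vertex is removed
```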
377

Three-level block truncation coding

Lee, Deborah Ann 01 January 1988 (has links) (PDF)
Block Truncation Coding (BTC) techniques, to date, utilize a two-level image block code. This thesis presents and studies a new BTC method employing a three-level image coding technique (3LBTC). The new method is applied to actual image frames and compared with other well-known block truncation coding techniques. The method separates an image into disjoint, rectangular regions of fixed size and finds the highest and lowest pixel values of each. Using these values, the image block pixel value range is calculated and divided into three equal sections. The individual image block pixels are then quantized according to the section into which their pixel value falls: 2 if in the upper third, 1 if in the middle third, and 0 if in the lower third of the range. Thus, each pixel requires only two bits for transmission. This is one bit per pixel more than the other well-known BTC techniques, so 3LBTC has a smaller compression ratio. When the BTC techniques were applied to actual images, the 3LBTC reconstructed images had the smallest mean-squared error of the techniques applied. 3LBTC also produced favorable results in terms of the entropy of the reconstructions as compared to the entropy of the original images. The reconstructed images were very good replicas of the originals, and the 3LBTC process had the fastest processing speed. For applications where coding and reconstruction speed are crucial and bandwidth is not critical, 3LBTC provides an image coding solution.
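The quantization step described above maps each pixel of a block to one of three levels. The following is a minimal sketch of just that encoding step for a single block, under the assumption that values landing exactly on a section boundary go to the higher level; bit packing and the reconstruction stage are omitted.

```python
import numpy as np

def encode_3lbtc_block(block):
    """Quantize one image block to three levels, as described in the abstract.

    The block's pixel-value range [lo, hi] is split into three equal sections;
    each pixel is coded as 0, 1, or 2 according to the section it falls in,
    so every pixel needs two bits. The (lo, hi) pair is kept for reconstruction.
    """
    block = np.asarray(block, dtype=float)
    lo, hi = block.min(), block.max()
    if hi == lo:                       # flat block: everything maps to level 0
        return np.zeros_like(block, dtype=np.uint8), (lo, hi)
    third = (hi - lo) / 3.0
    codes = np.zeros_like(block, dtype=np.uint8)
    codes[block >= lo + third] = 1         # middle third of the range
    codes[block >= lo + 2.0 * third] = 2   # upper third of the range
    return codes, (lo, hi)

block = np.array([[ 10,  40,  90, 200],
                  [ 15,  60, 130, 220],
                  [ 20,  70, 150, 240],
                  [ 25,  80, 180, 250]])
codes, (lo, hi) = encode_3lbtc_block(block)
print(codes)   # 0 for the lower third, 1 for the middle, 2 for the upper
```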
378

Photo-graft: a critical analysis of image manipulation

Gavard, Sandra. January 1999 (has links)
No description available.
379

Optical music recognition using projections

Fujinaga, Ichiro January 1988 (has links)
No description available.
380

Comparison of accuracy and efficiency of five digital image classification algorithms

Story, Michael Haun 12 April 2010 (has links)
Accuracies and efficiencies of five algorithms for computer classification of multispectral digital imagery were assessed by application to imagery of three test sites (Roanoke, VA; Glade Spring, VA; and Topeka, KS). A variety of land cover features and two types of image data (Landsat MSS and Thematic Mapper) were represented. Classification algorithms were selected from the General Image Processing System (GIPSY) at the Spatial Data Analysis Laboratory at Virginia Polytechnic Institute and State University, Blacksburg, Virginia, and represent a range of available techniques: a) AMOEBA, an unsupervised clustering technique with a spatial constraint; b) ISODATA, a hybrid minimum distance classifier; c) BOXDEC, a discrete parallelepiped classifier; d) BCLAS, a Bayesian classifier; and e) HYPBOX, a combined parallelepiped-Bayesian classifier. Two sets of training data, developed for each study site, were combined with each technique and applied to each study site. Parallelepiped classifiers provided the highest classification accuracies but failed to categorize all pixels; the number of classified pixels could be altered by the method of selecting training data and/or by adjusting the threshold variable. The minimum distance classifiers were most accurate when the spectral sub-class training data were used. Use of the land cover class training data provided the most accurate results for the Bayesian techniques and decreased the CPU requirements for all of the techniques. The most important consideration for accurate and efficient classification is to select the classification algorithm that matches the data structure of the training data. / Master of Science
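For readers unfamiliar with the parallelepiped (box) decision rule that classifiers like BOXDEC and HYPBOX build on, the sketch below is a generic, hedged illustration rather than the GIPSY implementation: per-band (min, max) limits are assumed to have been derived from training data, and any pixel falling outside every class box is left unclassified, which is why such classifiers can fail to categorize all pixels.

```python
import numpy as np

def parallelepiped_classify(pixels, class_boxes):
    """Classify multispectral pixels with a simple parallelepiped (box) rule.

    class_boxes maps a class name to per-band (min, max) limits; a pixel is
    assigned to the first class whose box contains it in every band, and is
    left unclassified (None) otherwise. Generic illustration only.
    """
    labels = []
    for px in np.asarray(pixels, dtype=float):
        label = None
        for name, box in class_boxes.items():
            lows = np.array([b[0] for b in box])
            highs = np.array([b[1] for b in box])
            if np.all(px >= lows) and np.all(px <= highs):
                label = name
                break
        labels.append(label)
    return labels

# Hypothetical two-band training limits for two land-cover classes.
boxes = {"water":  [(10, 40), (5, 30)],
         "forest": [(30, 80), (40, 90)]}
pixels = [(20, 12), (55, 70), (200, 150)]
print(parallelepiped_classify(pixels, boxes))  # ['water', 'forest', None]
```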
