  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
391

Nonlinear filtering of color images

Sartor, Lloyd J. 01 July 2000 (has links)
No description available.
392

An entropy based adaptive image encoding technique

Murphy, Gregory Paul 01 January 1990 (has links)
Many image encoders exist that reduce the amount of information that needs to be transmitted or stored on disk. Reduction of information reduces the transmission rate but compromises image quality. The encoders with the best compression ratios often lose image quality by distorting the high-frequency portions of the image. Other encoders have algorithms too slow to work in real time. Encoders that use quantizers often exhibit a gray-scale contouring effect due to insufficient quantizer levels. This paper presents a fast encoding algorithm that reduces the number of quantizer levels without introducing an error large enough to cause gray-scale contouring. The new algorithm uses entropy to determine the most advantageous difference-mapping technique and the number of bits per pixel used to encode the image. The double-difference values are reduced in magnitude such that an eight-level power-series quantizer can be used without introducing an error large enough to cause gray-scale contouring. The one-dimensional application of the algorithm results in 3.0 bits per pixel with an RMS error of 4.2 gray-scale values. Applied two-dimensionally, the algorithm reduces the image to 1.5 bits per pixel with an RMS error of 6.7 gray-scale values.
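The entropy-driven choice of difference mapping described in this abstract can be sketched as follows. This is a minimal illustration under assumed details (the thesis's actual difference mappings and power-series quantizer are not reproduced): entropy is estimated for single- and double-difference streams, and the lower-entropy mapping is the one that needs fewer quantizer levels for the same error.

```python
import math
from collections import Counter

def entropy_bits(values):
    """Shannon entropy of a symbol stream, in bits per symbol."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def choose_encoding(row):
    """Pick the difference mapping with the lower entropy.

    Returns (mapping_name, entropy); the lower-entropy mapping can be
    encoded with fewer bits per pixel.
    """
    single = [b - a for a, b in zip(row, row[1:])]        # first difference
    double = [b - a for a, b in zip(single, single[1:])]  # double difference
    h1, h2 = entropy_bits(single), entropy_bits(double)
    return ("double", h2) if h2 < h1 else ("single", h1)

# A row whose slope changes steadily: double differences collapse
# to a single symbol, so their entropy drops to zero.
mapping, h = choose_encoding([10, 12, 15, 19, 24, 30])
```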
393

Unitary suprathreshold color-difference metrics of legibility for CRT raster imagery

Lippert, Thomas M. January 1985 (has links)
This dissertation examined the relationships between color contrast and legibility for digital raster video imagery. CIE colorimetric components were combined into three-dimensional color coordinate systems whose coordinates map one-to-one with the physical energy parameters of all colors. The distance between any two colors' coordinates in these 3-spaces is termed Color-Difference (ΔE). ΔE was hypothesized as a metric of the speed (RS) with which observers possessing normal vision could accurately read random numeral strings of one color displayed against backgrounds of another color. Two studies totaling 32,064 practice and experimental trials were conducted. The first study determined that the CIE Uniform Color Spaces are inappropriate for the modeling of RS. Subsequently, a different 3-space geometry and colorimetric component scaling were empirically derived from the Study 1 data to produce a one-dimensional ΔE scale which approximates an interval scale of RS. This ΔE scale and others were then applied to the different stimulus conditions in Study 2 to determine the generalizability of such ΔE metrics. The pair of studies is conclusive: several ΔE scales exist which serve equally well to describe or prescribe RS with multicolor CRT raster imagery for a range of character luminances in both positive and negative presentation polarities. These are the Y,u',v', logY,u',v', L*,u',v', and L*,u*,v* rescaled color spaces. Because of its predictive accuracy and simplicity, a luminance-generalized, ΔE-standardized Y,u',v' metric, accounting for 71% and 75% of the RS variability in Studies 1 and 2, respectively, is recommended as the most appropriate metric of emissive display legibility to be tested in these studies. / Ph. D.
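The color-difference computation behind the recommended Y,u',v' metric can be sketched as below. The CIE 1976 u',v' chromaticity formulas are standard; the per-axis weights are placeholders, since the dissertation's empirically derived rescalings are not reproduced here.

```python
def uv_prime(X, Y, Z):
    """CIE 1976 u', v' chromaticity coordinates from tristimulus XYZ."""
    d = X + 15 * Y + 3 * Z
    return 4 * X / d, 9 * Y / d

def delta_E_Yuv(c1, c2, wY=1.0, wu=1.0, wv=1.0):
    """Euclidean color difference in a Y,u',v' space.

    c1 and c2 are XYZ triples; wY, wu, wv are placeholder axis weights
    standing in for the dissertation's empirically derived scalings.
    """
    X1, Y1, Z1 = c1
    X2, Y2, Z2 = c2
    u1, v1 = uv_prime(X1, Y1, Z1)
    u2, v2 = uv_prime(X2, Y2, Z2)
    return ((wY * (Y1 - Y2)) ** 2
            + (wu * (u1 - u2)) ** 2
            + (wv * (v1 - v2)) ** 2) ** 0.5
```

Identical colors give ΔE = 0; the hypothesis above is that larger ΔE between text and background predicts faster reading speed.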
394

Analyzing perspective line drawings using hypothesis based reasoning

Mulgaonkar, Prasanna Govind January 1984 (has links)
One of the important issues in the middle levels of computer vision is how world knowledge should be gradually inserted into the reasoning process. In this dissertation, we develop a technique which uses hypothesis-based reasoning to reason about perspective line drawings using only the constraints supplied by the equations of perspective geometry. We show that the problem is NP-complete, and that it can be solved using modular inference engines for propagating constraints over the set of world-level entities. We also show that theorem-proving techniques, with their attendant complexity, are not necessary because the real-valued attributes of the world can be computed in closed form based only on the spatial relationships between world entities and measurements from the given image. / Ph. D.
395

Categorizing beef marbling scores using video image analysis

Danler, Robert Joseph. January 1985 (has links)
Call number: LD2668 .T4 1985 D36 / Master of Science
396

An investigation into multi-spectral tracking

Wood, Christiaan 03 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2005. / The purpose of this study was to investigate multi-spectral tracking. Various algorithms were investigated and developed to enhance the contrast between target and non-target classes. Different tracking algorithms were implemented on the resulting grayscale input. A physical tracking system consisting of a video input processor and DSP was designed and built to implement the algorithms and investigate the viability of real-time multi-spectral tracking. It is illustrated that conventional intensity tracking clouds the available information, and that by studying various spectral inputs, information is extracted more efficiently from the available data.
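A minimal sketch of combining two spectral bands into the single grayscale input such a tracker consumes, assuming a normalized-difference combination rule (one plausible contrast-enhancement choice, not necessarily the one used in the study):

```python
def spectral_contrast(band_a, band_b, eps=1e-6):
    """Combine two spectral bands into one grayscale image.

    Per-pixel normalized difference (a - b) / (a + b): pixels bright in
    one band but dark in the other stand out near +/-1, while material
    that reflects both bands equally maps to ~0. This can separate a
    target from a background that single-band intensity would merge.
    """
    return [[(a - b) / (a + b + eps)
             for a, b in zip(ra, rb)]
            for ra, rb in zip(band_a, band_b)]
```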
397

Onboard image geo-referencing for LEO satellites

Van den Dool, Riaan 12 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2005. / The next generation of small satellites will require significant onboard data processing and information extraction capabilities to keep up with industry demands. The need for value-added information products is growing as accessibility and user education improve. Image geo-referencing is one of the image processing steps needed to transform raw images into usable information. Automating this process would result in a vast improvement in processing time and cost. As part of the background study, the imaging process is described and a model of the process is created. The sources of distortion that are present in the imaging process are described and techniques to compensate for them are discussed. One method that stands out is using wavelet analysis for the precision geo-referencing of images. Wavelets are used in this thesis for automatic ground control point identification. Finally, the automatic ground control point algorithm is used for band-alignment of a set of aerial images at sub-pixel accuracy as a demonstration of the quality of ground control points that can be found.
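As a rough illustration of how wavelets can flag candidate ground control points, a one-level 2D Haar transform marks high-contrast structure through its detail coefficients. This is a generic sketch under assumed details, not the thesis's algorithm:

```python
def haar2d_details(img):
    """One level of a 2D Haar transform on an even-sized grayscale image.

    Returns (LL, detail): LL is the half-resolution average image, and
    detail is the combined magnitude of the horizontal, vertical and
    diagonal coefficients. Large detail values mark high-contrast
    structure, i.e. candidate ground control points.
    """
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    detail = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0
            lh = (a - b + c - d) / 4.0   # horizontal detail
            hl = (a + b - c - d) / 4.0   # vertical detail
            hh = (a - b - c + d) / 4.0   # diagonal detail
            detail[i // 2][j // 2] = abs(lh) + abs(hl) + abs(hh)
    return LL, detail
```

A flat region yields zero detail; a vertical step edge yields a strong horizontal-detail response, which is where a matcher would look for control points.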
398

Artificial neural networks for image recognition : a study of feature extraction methods and an implementation for handwritten character recognition.

Moodley, Deshendran. January 1996 (has links)
The use of computers for digital image recognition has become quite widespread. Applications include face recognition, handwriting interpretation and fingerprint analysis. A feature vector whose dimension is much lower than that of the original image data is used to represent the image. This removes redundancy from the data and drastically cuts the computational cost of the classification stage. The most important criterion for the extracted features is that they must retain as much as possible of the discriminatory information present in the original data. Feature extraction methods which have been used with neural networks are moment invariants, Zernike moments, Fourier descriptors, Gabor filters and wavelets. These, together with the Neocognitron, which incorporates feature extraction within a neural network architecture, are described, and two methods, Zernike moments and the Neocognitron, are chosen to illustrate the role of feature extraction in image recognition. / Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 1996.
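Of the feature-extraction families listed above, moment invariants are the simplest to sketch. The snippet below computes the first Hu invariant of a binary image as an assumed illustration of the idea (Zernike moments, as used in the thesis, replace the monomial basis with orthogonal radial polynomials but serve the same dimensionality-reduction role):

```python
def hu_phi1(img):
    """First Hu moment invariant, eta20 + eta02, of a binary image.

    Translation- and scale-invariant: the same shape shifted within the
    frame produces the same feature value, which is exactly what a
    classifier wants from an extracted feature.
    """
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00
    mu20 = mu02 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v
            mu02 += (y - cy) ** 2 * v
    # Normalized central moments: eta_pq = mu_pq / m00 ** ((p+q)/2 + 1)
    return (mu20 + mu02) / m00 ** 2
```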
399

Event detection in surveillance video

Unknown Date (has links)
Digital video is used widely in a variety of applications such as entertainment, surveillance and security. The large amount of video in surveillance and security applications requires systems capable of processing video to automatically detect and recognize events, to alleviate the load on human operators and enable preventive action when events are detected. The main objective of this work is the analysis of computer vision techniques and algorithms used to perform automatic detection of events in video sequences. This thesis presents a surveillance system based on optical flow and background subtraction concepts to detect events through motion analysis, using an event probability zone definition. Advantages, limitations, capabilities and possible alternative solutions are also discussed. The result is a system capable of detecting events of objects moving in a direction opposing a predefined condition, or running in the scene, with precision greater than 50% and recall greater than 80%. / by Ricardo Augusto Castellanos Jimenez. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
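A minimal background-subtraction step of the kind described above can be sketched as follows. The threshold is an assumed parameter, and the optical-flow, direction and probability-zone analysis of the actual system are not shown:

```python
def detect_motion(background, frame, threshold):
    """Background subtraction on grayscale frames.

    Flags pixels whose absolute difference from the background model
    exceeds the threshold, and returns the binary motion mask together
    with the count of moving pixels; a caller would compare that count
    (and the motion direction) against its event criteria.
    """
    mask = [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
    moving = sum(map(sum, mask))
    return mask, moving
```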
400

Bioinformatics-inspired binary image correlation: application to bio-/medical-images, microarrays, finger-prints and signature classifications

Unknown Date (has links)
The efforts addressed in this thesis refer to assaying the extent of local features in 2D images for the purpose of recognition and classification. It is based on comparing a test image against a template in binary format. It is a bioinformatics-inspired approach, pursued and presented as deliverables of this thesis as summarized below:

1. By applying the so-called 'Smith-Waterman (SW) local alignment' and 'Needleman-Wunsch (NW) global alignment' approaches of bioinformatics, a test 2D image in binary format is compared against a reference image so as to recognize the differential features that reside locally in the images being compared.
2. SW- and NW-algorithm-based binary comparison involves extending the one-dimensional sequence-alignment procedure (traditionally used for molecular sequence comparison in bioinformatics) to a 2D image matrix.
3. The relevant computational algorithms are implemented as MATLAB code.
4. The test images considered are real-world bio-/medical images, synthetic images, microarrays, biometric fingerprints (thumb impressions) and handwritten signatures.

Based on the results, conclusions are enumerated and inferences are made with directions for future studies. / by Deepti Pappusetty. / Thesis (M.S.C.S.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
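The Smith-Waterman local-alignment comparison described above can be illustrated on binary strings such as image rows. The scoring parameters below are assumed for illustration, not taken from the thesis:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two sequences.

    Applied row by row to binarized images, the best local score
    measures how well a stretch of the test row matches the template
    row, tolerating insertions and deletions via gap penalties.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Identical rows score the maximum (match score times length), while rows with no symbols in common score zero; intermediate scores localize where the two images differ.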
