211.
An examination of two algorithms for digital image registration. Cyran, Edward Joseph, January 1983.
Two digital image registration algorithms, correlation and sequential similarity detection (SSDA), are tested and evaluated on a personal computer system for speed of execution, accuracy, and optimum parameter determination.
The programs are written in BASIC and can be easily converted to FORTRAN or another high-level language.
Three different polynomial functions are tested and evaluated to improve the resolution of the correlation determination.
Based on the results of the tests, it was concluded that the testing of registration algorithms is feasible on small computer systems and that sequential similarity detection is faster. An optimum threshold setting can be determined for an individual image, and increasing the magnitude of the SSDA threshold parameter increases the execution time of the SSDA program. Also, the resolution of the correlation can be improved with a curve-fitting technique. / Master of Engineering
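To make the SSDA threshold behavior concrete, here is a minimal Python sketch (not the thesis's BASIC program) of threshold-based sequential similarity detection; the function name and the absolute-difference error measure are illustrative assumptions. A candidate offset is abandoned as soon as its accumulated error exceeds the threshold, which is why a larger threshold lengthens the search.

```python
import numpy as np

def ssda_register(reference, window, threshold):
    """Illustrative SSDA sketch: slide `window` over `reference`, accumulate
    absolute pixel differences at each offset, and abandon an offset as soon
    as the running error exceeds `threshold`.  The offset that survives the
    most pixels before crossing the threshold is reported as the match."""
    rh, rw = reference.shape
    wh, ww = window.shape
    flat_window = window.ravel().astype(float)
    best_offset, best_count = (0, 0), -1
    for i in range(rh - wh + 1):
        for j in range(rw - ww + 1):
            patch = reference[i:i + wh, j:j + ww].ravel().astype(float)
            error, count = 0.0, 0
            for a, b in zip(patch, flat_window):
                error += abs(a - b)
                if error > threshold:
                    break
                count += 1
            if count > best_count:
                best_offset, best_count = (i, j), count
    return best_offset
```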
212.
Improved Gain Stability of a Digital Imager Using a Charge Feedback Amplifier. Mylott, Elliot Eckman, 11 June 2015.
Digital imagers, including charge-coupled devices (CCDs), are essential to most forms of modern photographic technology. The quality of the data produced by digital imagers has made them an invaluable scientific measurement tool. Despite the numerous advantages of digital imagers, there are still factors that limit their performance. One such factor is the stability of the camera's gain, the ratio that dictates the imager's ability to convert incident photons to a measurable output voltage. Variations in gain can affect the linearity of the device and produce inaccurate measurements.
One of the factors that determines the gain of the camera is the sensitivity of the output amplifier. The purpose of this study is to compare the performance of two different output amplifier structures: the traditional source follower (SF) and the charge feedback amplifier (CFA). In studies of other solid-state detectors, the CFA has shown greater stability against variations in certain system parameters and environmental conditions such as operating temperature. The CFA is thought to show superior stability over the SF because the gain of the SF depends on multiple capacitances associated with the reset and output transistors, whereas the CFA gain depends only on its feedback capacitance. Furthermore, the CFA is able to handle a larger amount of charge than the SF, which increases the dynamic range of the output amplifier.
In this research, output amplifier stability is measured using gain and linearity data collected from a CCD manufactured with both types of amplifiers. Preliminary data are presented indicating that the CFA exhibits greater linearity, a larger dynamic range, and a more stable gain than the SF. Despite this, the CFA suffers from a significantly higher noise level. Suggestions are also given for future research to verify and expand upon the results presented here.
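As background for how gain can be measured in practice, the following Python sketch shows the standard mean-variance (photon transfer) estimate of camera gain from two flat-field frames and a bias frame; this is a common CCD characterization technique, not necessarily the exact procedure used in this thesis, and the function name is hypothetical.

```python
import numpy as np

def estimate_gain(flat_a, flat_b, bias):
    """Photon-transfer sketch: estimate gain K (electrons per digital number)
    from a matched pair of flat-field frames and a bias frame, assuming the
    signal is shot-noise limited.  Differencing the two flats cancels
    fixed-pattern noise; the variance of the difference is twice the
    per-frame shot-noise variance."""
    signal = 0.5 * (flat_a.astype(float) + flat_b.astype(float)) - bias.astype(float)
    diff_var = np.var(flat_a.astype(float) - flat_b.astype(float)) / 2.0
    return float(np.mean(signal) / diff_var)   # K = mean signal / shot-noise variance
```

Repeating the estimate over a range of exposure levels also yields the kind of linearity data referred to above.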
213.
A comparative study of the performance of various image analysis methods for dimensional inspection with vision systems. Koeppe, Ralf, 01 January 1989.
Dimensional inspection with vision systems requires a careful selection of image analysis methods in order to obtain accurate information about the geometry of the parts to be measured.
The purpose of this project is to study, implement, and compare different image evaluation methods and to show their strengths and weaknesses with respect to dimensional inspection. Emphasis is placed on the inspection of circular features. The criteria of comparison for these methods are discussed. Using synthetically generated images, various analysis methods are compared and conclusions for their use are drawn. Results of the comparison show that the selection of a method has to be made with regard to the noise level of the measurement. Finally, a computationally fast calibration algorithm is studied and implemented.
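As one concrete example of the kind of image-evaluation step involved in inspecting circular features, the sketch below fits a circle to extracted edge points with the algebraic (Kasa) least-squares method; the specific methods compared in the project are not reproduced here, and the function name is illustrative.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: solve x^2 + y^2 + D*x + E*y + F = 0
    for D, E, F in a least-squares sense, then recover the centre (a, b) and
    radius r.  `x` and `y` are 1-D arrays of edge-point coordinates."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, rhs, rcond=None)[0]
    a, b = -D / 2.0, -E / 2.0
    r = np.sqrt(a * a + b * b - F)
    return a, b, r
```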
214.
Photographic transformations and greyscale pictures. Phillips, Carlos, January 2006.
No description available.
215.
A relational picture editor. Düchting, Bernhard, January 1983.
No description available.
216.
Iterative algorithms for fast, signal-to-noise ratio insensitive image restoration. Lie Chin Cheong, Patrick, January 1987.
No description available.
217.
Human performance evaluations of selected image enhancement/restoration techniques. Chao, Betty P., January 1983.
Recently, the number of digital imaging systems incorporated into information display applications, such as military and industrial aerial reconnaissance, has increased rapidly. These imaging systems provide considerable flexibility for the processing and enhancement of information that otherwise might go unnoticed in conventional imaging systems. Many of the digital enhancement techniques, however, have not been subjected to systematic evaluations to examine their influence upon operationally relevant human task performance. This paper reports the findings of a segment of an ongoing research program designed to establish a digital image database, to standardize a set of experimental procedures for obtaining human performance data, and to relate these performance measures to various image display conditions.
The image database consists of low-altitude aerial scenes of various military and civilian installations. Original transparencies were digitized with a microdensitometer to generate the image database for magnetic tape storage. The digitized images were then degraded by blur and noise to simulate various levels of system resolution and system signal-to-noise ratio, respectively.
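A minimal Python sketch of this kind of degradation is given below: a separable Gaussian blur stands in for reduced system resolution and additive Gaussian noise for a reduced signal-to-noise ratio. The parameters and implementation are illustrative, not the calibrated degradation levels used in the study.

```python
import numpy as np

def degrade(image, blur_sigma, noise_sigma, rng=None):
    """Blur an image with a separable Gaussian kernel, then add zero-mean
    Gaussian noise.  `blur_sigma` controls the simulated loss of resolution
    and `noise_sigma` the simulated loss of signal-to-noise ratio."""
    rng = np.random.default_rng(0) if rng is None else rng
    radius = max(1, int(3 * blur_sigma))
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2.0 * blur_sigma ** 2))
    kernel /= kernel.sum()
    # Separable blur: filter rows, then columns.
    blurred = np.apply_along_axis(lambda row: np.convolve(row, kernel, "same"),
                                  1, image.astype(float))
    blurred = np.apply_along_axis(lambda col: np.convolve(col, kernel, "same"),
                                  0, blurred)
    return blurred + rng.normal(0.0, noise_sigma, image.shape)
```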
Two experimental tasks were developed to assess the effects of digital image quality upon human performance characteristics of interest to the military reconnaissance community. An information extraction task required the human observers to answer a series of questions pertaining to the essential elements of information within each image. A subjective rating task required observers to estimate the extent of image interpretability.
Using military photointerpreters as subjects, studies were conducted to assess the effects of image degradations (blur and noise) and image enhancement/restoration processing on human performance. The studies employed high-resolution, black-and-white CRT monitors to display the digital images. Results indicated that both blur and noise image degradations impaired interpretability of the imagery and that several enhancement/restoration processing techniques substantially improved interpretability of the imagery. These results provide useful information for users of digital imaging systems and for researchers to aid future developments of digital image processes. / Ph. D.
218.
Design of a real time digital image correlator. Khan, Safi S., January 1993.
This thesis presents the design of a real time digital image correlator circuit. The circuit accepts images in real time from two video sources, with one video source serving as a reference with which images from the second video source are compared. The proposed circuit extracts a reference window from the reference image and a horizontal band from the input image, and performs real time image cross-correlation on the two extracted portions. The resulting cross-correlation values are shifted out of the circuit as they are computed. The size of the reference window, the size of the image pairs, and the vertical offset of the reference window and input band are externally selectable. The circuit has been modeled in VHDL and simulated using the SYNOPSYS VHDL Simulator. This thesis also presents a proposed implementation of the circuit. / M.S.
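Although the thesis implements the correlator in hardware (VHDL), its core operation can be sketched in software as below: a reference window is slid across a horizontal band of the input image and one correlation value is emitted per horizontal offset. The normalized correlation measure and the function name are illustrative assumptions, not the exact arithmetic of the circuit.

```python
import numpy as np

def correlate_band(reference_window, input_band):
    """Slide `reference_window` across `input_band` (assumed to have the same
    height) and return one normalized cross-correlation value per horizontal
    offset."""
    wh, ww = reference_window.shape
    ref = reference_window.astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    scores = []
    for j in range(input_band.shape[1] - ww + 1):
        patch = input_band[:, j:j + ww].astype(float)
        patch = (patch - patch.mean()) / (patch.std() + 1e-9)
        scores.append(float(np.mean(ref * patch)))
    return np.array(scores)
```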
219.
An optical/digital incoherent image processing system for an extended depth of field. Motamedi, Masoud, January 1985.
A severely defocused incoherent system has isolated zeros in its optical transfer function (OTF); therefore, exact inverse filtering cannot be performed. It has been established that, by using an annular aperture in an optical system, the depth of focus can be extended. Isolated zeros in the OTF can therefore be avoided by choosing an annular aperture with a proper radius ratio. However, in the process of increasing the depth of focus of the system, this method results in a loss of image contrast. An annular-pass filter can be used to restore this loss in contrast. A simple hybrid optical/digital image processing system in which a TV camera is coupled with an annular aperture is considered. The annular-pass filtering to compensate for the loss of contrast is performed by a digital computer. The experimental results are presented. / M.S.
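A rough Python sketch of the digital half of such a hybrid system is given below: the image spectrum is multiplied by an annular-pass boost to compensate for the contrast lost to the annular aperture. The radii and boost factor are illustrative assumptions; an actual filter would be matched to the aperture's measured OTF.

```python
import numpy as np

def annular_pass_filter(image, r_inner, r_outer, boost):
    """Boost an annular band of spatial frequencies.  `r_inner` and `r_outer`
    are normalized frequencies (0 to 0.5); frequencies inside the annulus are
    multiplied by `boost`, all others pass unchanged."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    rho = np.sqrt(fx ** 2 + fy ** 2)
    gain = np.where((rho >= r_inner) & (rho <= r_outer), float(boost), 1.0)
    spectrum = np.fft.fft2(image.astype(float)) * gain
    return np.real(np.fft.ifft2(spectrum))
```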
220.
Video Categorization Using Semantics and Semiotics. Rasheed, Zeeshan, 01 January 2003.
There is a great need to automatically segment, categorize, and annotate video data, and to develop efficient tools for browsing and searching. We believe that the categorization of videos can be achieved by exploring the concepts and meanings of the videos. This task requires bridging the gap between low-level content and high-level concepts (or semantics). Once a relationship is established between the low-level computable features of the video and its semantics, the user would be able to navigate through videos using concepts and ideas (for example, a user could extract only those scenes in an action film that actually contain fights) rather than sequentially browsing the whole video. However, this relationship must follow the norms of human perception and abide by the rules that are most often followed by the creators (directors) of these videos. These rules are called film grammar in video production literature. Like any natural language, this grammar has several dialects, but it has been acknowledged to be universal. Therefore, the knowledge of film grammar can be exploited effectively for the understanding of films. To interpret an idea using the grammar, we need to first understand the symbols, as in natural languages, and second, understand the rules of combination of these symbols to represent concepts. In order to develop algorithms that exploit this film grammar, it is necessary to relate the symbols of the grammar to computable video features.
In this dissertation, we have identified a set of computable features of videos and have developed methods to estimate them. A computable feature of audio-visual data is defined as any statistic of available data that can be automatically extracted using image/signal processing and computer vision techniques. These features are global in nature and are extracted using whole images; therefore, they do not require any object detection, tracking, or classification. These features include video shots, shot length, shot motion content, color distribution, key-lighting, and audio energy. We use these features and exploit the knowledge of ubiquitous film grammar to solve three related problems: segmentation and categorization of talk and game shows; classification of movie genres based on the previews; and segmentation and representation of full-length Hollywood movies and sitcoms.
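As an example of how one such computable feature can be extracted, the sketch below detects hard cuts (shot boundaries) by histogram intersection between consecutive frames; the bin count and threshold are illustrative, not values from the dissertation.

```python
import numpy as np

def detect_shot_cuts(frames, bins=16, threshold=0.4):
    """Flag a cut whenever the grey-level histogram intersection between
    consecutive frames drops below `threshold`.  `frames` is an iterable of
    2-D arrays with values in [0, 255]."""
    cuts, prev_hist = [], None
    for index, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / max(hist.sum(), 1)
        if prev_hist is not None and np.minimum(hist, prev_hist).sum() < threshold:
            cuts.append(index)
        prev_hist = hist
    return cuts
```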
We have developed a method for organizing videos of talk and game shows by automatically separating the program segments from the commercials and then classifying each shot as the host's or guest's shot. In our approach, we rely primarily on information contained in shot transitions and utilize the inherent difference in the scene structure (grammar) of commercials and talk shows. A data structure called a shot connectivity graph is constructed, which links shots over time using temporal proximity and color similarity constraints. Analysis of the shot connectivity graph helps us to separate commercials from program segments. This is done by first detecting stories, and then assigning a weight to each story based on its likelihood of being a commercial or a program segment. We further analyze stories to distinguish shots of the hosts from those of the guests. We have performed extensive experiments on eight full-length talk shows (e.g. Larry King Live, Meet the Press, News Night) and game shows (Who Wants To Be A Millionaire), and have obtained excellent classification with 96% recall and 99% precision. http://www.cs.ucf.edu/~vision/projects/LarryKing/LarryKing.html
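The sketch below illustrates the shot connectivity graph idea under simplifying assumptions: each shot is summarized by a color histogram, and an edge links a shot to any earlier shot that lies within a temporal memory window and is sufficiently similar. The memory length and similarity threshold are placeholders, not the dissertation's settings.

```python
import numpy as np

def shot_connectivity_graph(shot_histograms, memory=10, similarity_threshold=0.7):
    """Return a list of (earlier_shot, later_shot) edges.  Shots are linked
    when they fall within `memory` shots of each other and the intersection
    of their normalized color histograms exceeds `similarity_threshold`."""
    edges = []
    for i, hist_i in enumerate(shot_histograms):
        for j in range(max(0, i - memory), i):
            similarity = np.minimum(hist_i, shot_histograms[j]).sum()
            if similarity >= similarity_threshold:
                edges.append((j, i))
    return edges
```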
Secondly, we have developed a novel method for genre classification of films using film previews. In our approach, we classify previews into four broad categories: comedies, action, dramas, or horror films. Computable video features are combined in a framework with cinematic principles to provide a mapping to these four high-level semantic classes. We have developed two methods for genre classification: (a) a hierarchical method and (b) an unsupervised classification method. In the hierarchical method, we first classify movies into action and non-action categories based on the average shot length and motion content in the previews. Next, non-action movies are sub-classified into comedy, horror, or drama categories by examining their lighting key. Finally, action movies are ranked on the basis of the number of explosions/gunfire events. In the unsupervised method for classifying movies, a mean shift classifier is used to discover the structure of the mapping between the computable features and each film genre. We have conducted extensive experiments on over a hundred film previews and demonstrated that low-level features can be efficiently utilized for movie classification. We achieved about 87% successful classification. http://www.cs.ucf.edu/~vision/projects/movieClassification/movieClassification.html
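The hierarchy described above can be summarized by a simple rule sketch; the cutoff values below are purely illustrative placeholders, not the thresholds learned or used in the dissertation.

```python
def classify_genre(avg_shot_length, motion_content, lighting_key, explosion_count,
                   shot_length_cutoff=6.0, motion_cutoff=0.3, key_cutoff=0.5):
    """First split action from non-action using editing pace and motion, then
    use the lighting key to separate comedy (high key) from horror/drama
    (low key); action films are further ranked by explosion/gunfire count."""
    if avg_shot_length < shot_length_cutoff and motion_content > motion_cutoff:
        return "action", explosion_count   # rank action films by event count
    if lighting_key > key_cutoff:
        return "comedy", None
    return "horror_or_drama", None
```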
Finally, we have addressed the problem of detecting scene boundaries in full-length feature movies. We have developed two novel approaches to automatically find scenes in the videos. Our first approach is a two-pass algorithm. In the first pass, shots are clustered by computing backward shot coherence, a shot color similarity measure that detects potential scene boundaries (PSBs) in the videos. In the second pass we compute scene dynamics for each scene as a function of shot length and the motion content in the potential scenes. In this pass, a scene-merging criterion is used to remove weak PSBs in order to reduce over-segmentation. In our second approach, we cluster shots into scenes by transforming this task into a graph-partitioning problem. This is achieved by constructing a weighted undirected graph called a shot similarity graph (SSG), where each node represents a shot and the edges between the shots are weighted by their similarities (color and motion). The SSG is then split into sub-graphs by applying the normalized cut technique for graph partitioning. The partitions obtained represent individual scenes in the video. We further extend the framework to automatically detect the best representative key frames of identified scenes. With this approach, we are able to obtain a compact representation of huge videos in a small number of key frames. We have performed experiments on five Hollywood films (Terminator II, Top Gun, Gone In 60 Seconds, Golden Eye, and A Beautiful Mind) and one TV sitcom (Seinfeld) that demonstrate the effectiveness of our approach. We achieved about 80% recall and 63% precision in our experiments. http://www.cs.ucf.edu/~vision/projects/sceneSeg/sceneSeg.html
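For readers unfamiliar with the normalized cut step, the sketch below performs one spectral bipartition of a shot similarity graph; recursively reapplying it to each part would yield a scene partition along the lines described above. The median split and the small regularization constant are illustrative choices.

```python
import numpy as np

def normalized_cut_bipartition(similarity):
    """One two-way normalized-cut split.  `similarity` is a symmetric n x n
    matrix of shot-to-shot weights (e.g. combined color/motion similarity).
    The sign pattern of the second generalized eigenvector of
    (D - W) x = lambda * D x gives the partition."""
    W = np.asarray(similarity, dtype=float)
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    # Symmetric normalized Laplacian: I - D^(-1/2) W D^(-1/2)
    laplacian = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    fiedler = d_inv_sqrt @ eigvecs[:, 1]     # back-transform to the generalized problem
    return fiedler > np.median(fiedler)      # boolean scene membership per shot
```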