221

Development of an inexpensive computer vision system for grading oyster meats

Awa, Teck Wah 15 July 2010 (has links)
The objective of this study was to develop an inexpensive automated device for grading raw oyster meats. The automation technique chosen was digital imaging. Typically, a computer vision system contains a microcomputer and a digital camera. An inexpensive digital camera connected to a personal computer was used to measure the projected area of the oyster meats. Physical characteristics of the oyster meats were important in designing a computer vision grading system, but the necessary data were not found in the literature. Selected physical characteristics of oyster meats, including the projected area, weight, height, and volume, were measured by independent methods. The digital image areas were found to be highly correlated with oyster meat volumes and weights. Currently, oysters are marketed on the basis of volume. The results from this study indicated that the relationship between the oyster meat area as measured by computer vision and volume can be used as a grading criterion. The oysters ranged in volume from 3.5 cm³ to 19.4 cm³. A three-dimensional image was not required because the height was not important. Tests showed that the system was consistent and successfully graded 5 oysters per second. The system was calibrated, and the prediction equation was validated with an estimated measurement error of ± 3.04 cm³ at a 95% confidence level. The development of automated graders using digital imaging techniques could help improve the quality and consistency of graded oyster meats. / Master of Science
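A minimal sketch of the area-based grading idea described above, assuming OpenCV for segmentation; the threshold, pixel-to-centimeter scale, regression coefficients, and grade boundaries are illustrative assumptions rather than the thesis's calibrated values.

```python
import cv2

def projected_area_cm2(image_path, px_per_cm=20.0, thresh=60):
    """Estimate the projected meat area from a single top-down frame.

    The threshold and pixel scale are illustrative; a real system would
    be calibrated against reference objects of known size.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    area_px = cv2.countNonZero(mask)
    return area_px / (px_per_cm ** 2)

def predict_volume_cm3(area_cm2, slope=1.15, intercept=-0.8):
    # Hypothetical linear calibration fitted from paired area/volume
    # measurements; the thesis reports roughly +/- 3.04 cm^3 prediction
    # error at the 95% confidence level for its own calibrated equation.
    return slope * area_cm2 + intercept

def grade(volume_cm3):
    # Illustrative volume-based size classes, not an industry standard.
    if volume_cm3 < 7.0:
        return "small"
    if volume_cm3 < 13.0:
        return "medium"
    return "large"
```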
222

A versatile I/O system for a real time image processor

Adkar, Sanjay 14 November 2012 (has links)
A versatile I/O system for a real time image processor and a complex clocking circuit for the I/O system and the image processor have been designed. The I/O system receives data from an arbitrary video source. These data are digitized and conditioned to be compatible with the image processor. The image processor output is conditioned such that these data can be displayed on a standard RS-170 2:1 video monitor. Variable frame-rate reduction circuits and bit-reduction techniques such as line, column, and dot interlace are incorporated during output conditioning. Experiments on reducing the frame rate and bit rate of a processed image can be carried out using this I/O system. / Master of Science
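The line, column, and dot interlace schemes mentioned above can be modeled in software as subsampling masks that alternate from field to field. The sketch below is an illustrative NumPy model of those bit-reduction schemes, not the original hardware design.

```python
import numpy as np

def interlace_mask(shape, mode, phase=0):
    """Boolean mask of the pixels kept in one output field.

    mode is 'line', 'column', or 'dot'; phase alternates 0/1 from frame
    to frame so that successive fields cover the full raster.
    """
    mask = np.zeros(shape, dtype=bool)
    if mode == "line":
        mask[phase::2, :] = True
    elif mode == "column":
        mask[:, phase::2] = True
    elif mode == "dot":
        r, c = np.indices(shape)
        mask[(r + c) % 2 == phase] = True
    else:
        raise ValueError(mode)
    return mask

def reduce_frame_rate(frames, keep_every=2):
    # Simple frame-rate reduction: pass through every keep_every-th frame.
    return frames[::keep_every]
```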
223

Unified approach for the early understanding of images

Jeong, Dong-Seok January 1985 (has links)
In the quest for computer vision, that is, the automatic understanding of images, a powerful strategy has been to model the image parametrically. Two prominent kinds of approaches have been those based on polynomial models and those based on random-field models. This thesis combines these two methodologies, deciding on the proper model by means of a general decision criterion. The unified approach also admits composite polynomial/random-field models and is applicable to other statistical models as well. This new approach has advantages in many applications, such as image identification and image segmentation. In segmentation, we achieve speed by avoiding iterative pixel-by-pixel calculations. With the general decision criterion as a sophisticated tool, we can deal with images according to a variety of model hypotheses. Our experiments with synthesized images and real images, such as Brodatz textures, illustrate some identification and segmentation uses of the unified approach. / Master of Science
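As an illustration of choosing between a polynomial surface model and a random-field model for an image window, the sketch below fits both by least squares and keeps the one with the lower AIC-style penalized residual. The specific models and the AIC criterion are stand-ins for the thesis's basis functions and general decision criterion, which are not reproduced here.

```python
import numpy as np

def poly_fit_rss(win, order=2):
    """Residual sum of squares of a 2-D polynomial surface fit."""
    rows, cols = win.shape
    y, x = np.mgrid[:rows, :cols].astype(float)
    terms = [np.ones(win.size)]
    for i in range(1, order + 1):
        for j in range(i + 1):
            terms.append((x ** (i - j) * y ** j).ravel())
    A = np.stack(terms, axis=1)
    coef, *_ = np.linalg.lstsq(A, win.ravel().astype(float), rcond=None)
    return float(np.sum((win.ravel() - A @ coef) ** 2)), A.shape[1]

def ar_fit_rss(win):
    """Residual of a simple causal autoregressive (random-field) model."""
    z = win.astype(float)
    target = z[1:, 1:].ravel()
    A = np.stack([z[:-1, 1:].ravel(),    # north neighbour
                  z[1:, :-1].ravel(),    # west neighbour
                  z[:-1, :-1].ravel(),   # north-west neighbour
                  np.ones(target.size)], axis=1)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return float(np.sum((target - A @ coef) ** 2)), A.shape[1]

def choose_model(win):
    # An AIC-style penalized comparison stands in for the thesis's
    # general decision criterion (the small difference in sample counts
    # between the two fits is ignored in this sketch).
    n = win.size
    fits = {"polynomial": poly_fit_rss(win), "random_field": ar_fit_rss(win)}
    scores = {name: n * np.log(rss / n + 1e-12) + 2 * k
              for name, (rss, k) in fits.items()}
    return min(scores, key=scores.get)
```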
224

Topographic classification of digital image intensity surfaces

Laffey, Thomas Joseph January 1983 (has links)
A complete mathematical treatment is given for describing the topographic primal sketch of the underlying grey tone intensity surface of a digital image. Each picture element is independently classified and assigned a unique descriptive label, invariant under monotonically increasing grey tone transformations, from the set {peak, pit, ridge, ravine, saddle, flat, and hillside}, with hillside having subcategories {inflection point, slope, convex hill, concave hill, and saddle hill}. The topographic classification is based on the first and second directional derivatives of the estimated image intensity surface. Three different sets of basis functions, bicubic polynomial (local facet model), generalized splines, and the discrete cosine basis, are used to estimate the image intensity surface using a least squares technique. Zero-crossings of the first directional derivative are identified as locations of interest in the image. / M.S.
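A structural sketch of the per-pixel topographic labelling is given below. It estimates derivatives with central differences rather than the least-squares facet, spline, or cosine fits used in the thesis, and only part of the label set is assigned; the tolerances are illustrative.

```python
import numpy as np

def classify_topography(z, grad_tol=1e-3, curv_tol=1e-3):
    """Assign a coarse topographic label to each pixel of surface z."""
    z = z.astype(float)
    gy, gx = np.gradient(z)          # first derivatives (rows, cols)
    gxy, gxx = np.gradient(gx)       # second derivatives of gx
    gyy, gyx = np.gradient(gy)       # second derivatives of gy
    grad_mag = np.hypot(gx, gy)

    # Hessian eigenvalues at every pixel (closed form for 2x2 matrices).
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum(tr ** 2 - 4 * det, 0.0))
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2

    labels = np.full(z.shape, "hillside", dtype=object)
    zero_grad = grad_mag < grad_tol
    labels[zero_grad & (l1 < -curv_tol) & (l2 < -curv_tol)] = "peak"
    labels[zero_grad & (l1 > curv_tol) & (l2 > curv_tol)] = "pit"
    labels[zero_grad & (l1 > curv_tol) & (l2 < -curv_tol)] = "saddle"
    labels[zero_grad & (np.abs(l1) < curv_tol) & (np.abs(l2) < curv_tol)] = "flat"
    # Pixels with non-zero gradient but strong curvature across the
    # gradient direction would be refined into ridge/ravine in the full
    # classification scheme; that refinement is omitted here.
    return labels
```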
225

Video Categorization Using Semantics and Semiotics

Rasheed, Zeeshan 01 January 2003 (has links) (PDF)
There is a great need to automatically segment, categorize, and annotate video data, and to develop efficient tools for browsing and searching. We believe that the categorization of videos can be achieved by exploring the concepts and meanings of the videos. This task requires bridging the gap between low-level content and high-level concepts (or semantics). Once a relationship is established between the low-level computable features of the video and its semantics, the user would be able to navigate through videos through the use of concepts and ideas (for example, a user could extract only those scenes in an action film that actually contain fights) rather than sequentially browsing the whole video. However, this relationship must follow the norms of human perception and abide by the rules that are most often followed by the creators (directors) of these videos. These rules are called film grammar in video production literature. Like any natural language, this grammar has several dialects, but it has been acknowledged to be universal. Therefore, the knowledge of film grammar can be exploited effectively for the understanding of films. To interpret an idea using the grammar, we need to first understand the symbols, as in natural languages, and second, understand the rules of combination of these symbols to represent concepts. In order to develop algorithms that exploit this film grammar, it is necessary to relate the symbols of the grammar to computable video features. In this dissertation, we have identified a set of computable features of videos and have developed methods to estimate them. A computable feature of audio-visual data is defined as any statistic of available data that can be automatically extracted using image/signal processing and computer vision techniques. These features are global in nature and are extracted using whole images; therefore, they do not require any object detection, tracking, or classification. These features include video shots, shot length, shot motion content, color distribution, key-lighting, and audio energy. We use these features and exploit the knowledge of ubiquitous film grammar to solve three related problems: segmentation and categorization of talk and game shows; classification of movie genres based on the previews; and segmentation and representation of full-length Hollywood movies and sitcoms.

First, we have developed a method for organizing videos of talk and game shows by automatically separating the program segments from the commercials and then classifying each shot as the host's or guest's shot. In our approach, we rely primarily on information contained in shot transitions and utilize the inherent difference in the scene structure (grammar) of commercials and talk shows. A data structure called a shot connectivity graph is constructed, which links shots over time using temporal proximity and color similarity constraints. Analysis of the shot connectivity graph helps us to separate commercials from program segments. This is done by first detecting stories, and then assigning a weight to each story based on its likelihood of being a commercial or a program segment. We further analyze stories to distinguish shots of the hosts from those of the guests. We have performed extensive experiments on eight full-length talk shows (e.g. Larry King Live, Meet the Press, News Night) and game shows (Who Wants To Be A Millionaire), and have obtained excellent classification with 96% recall and 99% precision. http://www.cs.ucf.edu/~vision/projects/LarryKing/LarryKing.html

Secondly, we have developed a novel method for genre classification of films using film previews. In our approach, we classify previews into four broad categories: comedy, action, drama, or horror films. Computable video features are combined in a framework with cinematic principles to provide a mapping to these four high-level semantic classes. We have developed two methods for genre classification: (a) a hierarchical method and (b) an unsupervised classification method. In the hierarchical method, we first classify movies into action and non-action categories based on the average shot length and motion content in the previews. Next, non-action movies are sub-classified into comedy, horror, or drama categories by examining their lighting key. Finally, action movies are ranked on the basis of the number of explosion/gunfire events. In the unsupervised method for classifying movies, a mean shift classifier is used to discover the structure of the mapping between the computable features and each film genre. We have conducted extensive experiments on over a hundred film previews and demonstrated that low-level features can be efficiently utilized for movie classification. We achieved about 87% successful classification. http://www.cs.ucf.edu/~vision/projects/movieClassification/movieClassification.html

Finally, we have addressed the problem of detecting scene boundaries in full-length feature movies. We have developed two novel approaches to automatically find scenes in the videos. Our first approach is a two-pass algorithm. In the first pass, shots are clustered by computing backward shot coherence, a shot color similarity measure that detects potential scene boundaries (PSBs) in the videos. In the second pass we compute scene dynamics for each scene as a function of shot length and the motion content in the potential scenes. In this pass, a scene-merging criterion is used to remove weak PSBs in order to reduce over-segmentation. In our second approach, we cluster shots into scenes by transforming this task into a graph-partitioning problem. This is achieved by constructing a weighted undirected graph called a shot similarity graph (SSG), where each node represents a shot and the edges between the shots are weighted by their similarities (color and motion). The SSG is then split into sub-graphs by applying the normalized cut technique for graph partitioning. The partitions obtained represent individual scenes in the video. We further extend the framework to automatically detect the best representative key frames of identified scenes. With this approach, we are able to obtain a compact representation of huge videos in a small number of key frames. We have performed experiments on five Hollywood films (Terminator II, Top Gun, Gone In 60 Seconds, Golden Eye, and A Beautiful Mind) and one TV sitcom (Seinfeld) that demonstrate the effectiveness of our approach. We achieved about 80% recall and 63% precision in our experiments. http://www.cs.ucf.edu/~vision/projects/sceneSeg/sceneSeg.html
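As an illustration of the shot-similarity-graph idea in the final part of the abstract, the sketch below builds a weighted affinity matrix over shots from color and motion cues and partitions it with spectral clustering as a stand-in for the normalized cut procedure. The similarity functions, parameters, and feature inputs are assumptions, not the dissertation's.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def shot_similarity_graph(color_hists, motion, sigma_c=0.5, sigma_m=0.5):
    """Affinity matrix over shots from color and motion similarity.

    color_hists: (n_shots, n_bins) normalized histograms per shot;
    motion: (n_shots,) average motion content per shot.
    """
    n = len(motion)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d_color = np.sum(np.abs(color_hists[i] - color_hists[j]))
            d_motion = abs(motion[i] - motion[j])
            W[i, j] = np.exp(-d_color / sigma_c) * np.exp(-d_motion / sigma_m)
    return W

def partition_into_scenes(W, n_scenes):
    # Spectral clustering on the precomputed affinity matrix stands in
    # for the recursive normalized-cut partitioning described above.
    return SpectralClustering(n_clusters=n_scenes,
                              affinity="precomputed",
                              assign_labels="discretize",
                              random_state=0).fit_predict(W)
```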
226

Effects of grid lattice geometry on digital image filtering

Brown, Roger Owen 09 August 2012 (has links)
The spatial distribution of discrete sample points from an image affects digital image manipulation. The geometries of the grid lattice and edge are described for digital images. Edge-detecting digital filters are considered for segmenting an image. A comparison is developed between digital filters for two different digital image grid lattice geometries: the 8-neighbor grid lattice (rectangular tessellation) and the 6-neighbor grid lattice (hexagonal tessellation). Digital filters for discrete images are developed that are best approximations to the Laplacian operator applied to continuous two-dimensional mathematical surfaces. Discrepancies between the calculated Laplacian and the digital filtering results are analyzed, and a criterion is developed that compares grid lattice effects. The criterion shows that digital filtering in a 6-neighbor grid lattice is preferable to digital filtering in an 8-neighbor grid lattice. / Master of Science
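For reference, the sketch below contrasts a discrete Laplacian on the 8-neighbor (square) lattice with a 6-neighbor Laplacian on a hexagonal lattice stored in offset coordinates. The kernel weights are the simple unweighted neighbor sums, not the best-approximation filters derived in the thesis.

```python
import numpy as np
from scipy.ndimage import convolve

# 8-neighbour (square tessellation) Laplacian approximation.
LAP8 = np.array([[1, 1, 1],
                 [1, -8, 1],
                 [1, 1, 1]], dtype=float)

def laplacian_square(img):
    return convolve(img.astype(float), LAP8, mode="nearest")

def laplacian_hex(img):
    """6-neighbour Laplacian on a hexagonal lattice stored in 'odd-r'
    offset coordinates (odd rows shifted half a sample to the right),
    so the neighbour offsets depend on row parity.
    """
    z = img.astype(float)
    out = np.zeros_like(z)
    rows, cols = z.shape
    even_nb = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
    odd_nb = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]
    for r in range(1, rows - 1):
        nb = odd_nb if r % 2 else even_nb
        for c in range(1, cols - 1):
            out[r, c] = sum(z[r + dr, c + dc] for dr, dc in nb) - 6 * z[r, c]
    return out
```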
227

Analysis of grey level weighted Hough transforms

Topor, James E. 28 July 2008 (has links)
The Hough transform is a well known method for detecting lines in digital imagery. The results of the transform may be termed accurate if the lines detected correspond to lines occurring in the digital image. The accuracy of the transform is affected by the manner in which the transform is designed. Factors which affect transform accuracy include parameter space resolution, image space quantizing errors, parameter space quantizing errors, and Hough space peak detection strategy. One way to improve the accuracy of the transform is to weight each pixel’s Hough space sinusoid by an amount which reflects the pixel’s grey level contribution to an image space feature. This paper examines the behavior of the transform when a grey level weighting scheme is used. The effects of image space quantizing errors, parameter space quantizing errors, and parameter space resolution on the accuracy of the transform are also investigated. A general Hough space peak detection filter is proposed, and experimental results showing the feasibility of both the grey level weighting scheme and peak detection filter are presented. / Master of Science
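A minimal sketch of the grey-level weighting idea: each pixel votes along its (rho, theta) sinusoid with a weight equal to its grey level rather than a binary vote. The quantization choices are left as parameters and the peak-detection filter is omitted.

```python
import numpy as np

def weighted_hough(img, n_theta=180, n_rho=None):
    """Grey-level weighted Hough accumulator for line detection."""
    img = img.astype(float)
    rows, cols = img.shape
    diag = int(np.ceil(np.hypot(rows, cols)))
    if n_rho is None:
        n_rho = 2 * diag + 1
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta))
    ys, xs = np.nonzero(img)          # pixels with non-zero grey level
    weights = img[ys, xs]             # grey-level weights, not binary votes
    for t_idx, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        r_idx = np.round((rho + diag) * (n_rho - 1) / (2 * diag)).astype(int)
        np.add.at(acc[:, t_idx], r_idx, weights)
    return acc, thetas
```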
228

Determining intrinsic scene characteristics from images

Pong, Ting-Chuen January 1984 (has links)
Three fundamental problems in computer vision are addressed in this dissertation. The first deals with the problem of how to extract and assemble a rich symbolic representation of the gray level intensity changes in an image. Results show that the facet model based feature extraction scheme proposed here is superior to the other existing techniques. The second problem addressed deals with the interpretation of the resulting structures as three-dimensional object surfaces. The three different shape modules described in this dissertation are found to be useful in the recovery of intrinsic scene characteristics. Finally, mechanisms for interaction among different sources of information obtained from different shape modules are studied. It is demonstrated that interactions among shape modules can enhance the data acquired by different means. / Ph. D.
229

An integrated fuzzy rule-based image segmentation framework

Karmakar, Gour Chandra, 1970- January 2002 (has links)
Abstract not available
230

Algorithms for the correction of atmospheric turbulence in images

Rishaad, Abdoola January 2012 (has links)
D. Tech. Engineering Electrical. / Develops and compares algorithms to restore sequences degraded by the effects of atmospheric turbulence, with the focus placed on the removal of heat scintillation. Results in the dissertation were obtained using datasets divided into two categories: real datasets and simulated datasets. The real datasets consist of sequences obtained in the presence of real atmospheric turbulence. These datasets were obtained from the CSIR (Council for Scientific and Industrial Research) using their Cyclone camera and vary in range from 5 km to 15 km. The simulated sequences were generated using ground truth images/sequences. Both datasets can be further divided into sequences with real motion and sequences without real motion.
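As an illustration of how simulated sequences of this kind are commonly generated, the sketch below degrades a ground-truth frame with a smooth random warp followed by a blur. The dissertation's own simulation procedure is not reproduced here, so the functions and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def simulate_turbulence(frame, warp_strength=2.0, warp_scale=8.0,
                        blur_sigma=1.0, rng=None):
    """Degrade a ground-truth grayscale frame with warp plus blur."""
    rng = np.random.default_rng() if rng is None else rng
    rows, cols = frame.shape
    # Smooth per-pixel displacement fields stand in for scintillation.
    dx = gaussian_filter(rng.standard_normal((rows, cols)), warp_scale) * warp_strength
    dy = gaussian_filter(rng.standard_normal((rows, cols)), warp_scale) * warp_strength
    yy, xx = np.mgrid[0:rows, 0:cols]
    warped = map_coordinates(frame.astype(float), [yy + dy, xx + dx],
                             order=1, mode="reflect")
    return gaussian_filter(warped, blur_sigma)
```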
