141

Scanline calculation of radial influence for image processing

Ilbery, Peter William Mitchell, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2008 (has links)
Efficient methods for the calculation of radial influence are described and applied to two image processing problems: digital halftoning and mixed content image compression. The methods operate recursively on scanlines of image values, spreading intensity from scanline to scanline in proportions approximating a Cauchy distribution. For error diffusion halftoning, experiments show that this recursive scanline spreading provides an ideal pattern of distribution of error. Error diffusion using masks generated to provide this distribution of error alleviates error diffusion "worm" artifacts. The recursive scanline-by-scanline application of a spreading filter and a complementary filter can be used to reconstruct an image from its horizontal and vertical pixel difference values. When combined with the use of a downsampled image, the reconstruction is robust to incomplete and quantized pixel difference data. Such gradient field integration methods are described in detail, proceeding from the representation of images by gradient values along contours through to a variety of efficient algorithms. Comparisons show that this form of gradient field integration by convolution provides reduced distortion compared to other high-speed gradient integration methods. The reduced distortion can be attributed to success in approximating a radial pattern of influence. An approach to edge-based image compression is proposed using integration of gradient data along edge contours and regularly sampled low-resolution image data. This edge-based image compression model is similar to previous sketch-based image coding methods but allows a simple and efficient calculation of an edge-based approximation image. A low-complexity implementation of this approach to compression is described. The implementation extracts and represents gradient data along edge contours as pixel differences and calculates an approximate image by performing integration of pixel difference data by scanline convolution.
The implementation was developed as a prototype for compression of mixed content image data in printing systems. Compression results are reported and strengths and weaknesses of the implementation are identified.
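The recursive scanline mechanism the abstract describes can be illustrated with a minimal sketch: each row is the previous row convolved with a small spreading kernel, so the influence of a single pixel widens as it propagates downward. The three-tap kernel below is an assumption chosen only for illustration; the thesis designs filters (with a complementary filter) whose repeated application approximates a Cauchy, i.e. radial, influence profile.

```python
import numpy as np

def scanline_spread(height, width, kernel=(0.25, 0.5, 0.25)):
    """Toy recursive scanline spreading: row y is row y-1 convolved with
    a small kernel, so intensity spreads from scanline to scanline.
    The 3-tap kernel here is illustrative, not the thesis's filter."""
    img = np.zeros((height, width))
    img[0, width // 2] = 1.0          # unit impulse on the first scanline
    k = np.asarray(kernel)
    for y in range(1, height):
        img[y] = np.convolve(img[y - 1], k, mode="same")
    return img

field = scanline_spread(64, 129)
# Because the kernel sums to 1, intensity is conserved row by row,
# while the footprint of the original impulse grows with each scanline.
print(field.sum(axis=1)[:3])
```

Running this and plotting `field` shows a cone of influence opening downward from the impulse, which is the basic shape the thesis's Cauchy-approximating filters refine into a radial pattern.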
142

3D reconstruction of road vehicles based on textural features from a single image

Lam, Wai-leung, William. January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
143

Segmentation of medical image volumes using intrinsic shape information

Shiffman, Smadar. January 1900 (has links)
Thesis (Ph.D)--Stanford University, 1999. / Title from pdf t.p. (viewed April 3, 2002). "January 1999." "Adminitrivia V1/Prg/20000907"--Metadata.
144

3D reconstruction and camera calibration from circular-motion image sequences

Li, Yan, January 2005 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
145

User aid-based evolutionary computation for optimal parameter setting of image enhancement and segmentation

Darvish, Arman 01 December 2011 (has links)
Applications of imaging and image processing have become part of our daily life and play a crucial role in many real-world areas. Accordingly, the corresponding techniques are becoming more and more complicated. An image processing chain involves many recognizable tasks, such as filtering, color balancing, enhancement, segmentation, and post-processing. Generally speaking, all image processing techniques require a control parameter setting; the better these parameters are set, the better the results that can be achieved. Usually, these parameters are real numbers, so the search space is very large and brute-force searching is impossible, or at least very time-consuming. Therefore, optimal setting of the parameters is an essential requirement for obtaining desirable results. We are clearly faced with an optimization problem whose complexity depends on the number of parameters to be optimized and the correlation among them. A review of optimization methods shows that metaheuristic algorithms are the best candidates for this kind of problem. Metaheuristic algorithms are iterative approaches that can search very complex, large spaces to arrive at an optimal or near-optimal solution(s). They are able to solve black-box global optimization problems that are not solvable by classic mathematical methods. The first part of this thesis optimizes the control parameters for eye-illusion, image enhancement, and image thresholding tasks by using an interactive evolutionary optimization approach. Eye illusion and image enhancement are subjective, human perception-based problems, so no analytical fitness function has been proposed for them; their optimization is only possible through interactive methods. The second part concerns the setting of active contour (snake) parameters. The performance of active contours (snakes) is sensitive to eight correlated control parameters, which makes the parameter-setting problem complex to solve.
In this work, we have tried to set the parameters to their optimal values by using a sample segmented image provided by an expert. As case studies, we have used breast ultrasound, prostate ultrasound, and lung X-ray medical images. The proposed schemes are general enough to be investigated with other optimization methods and other image processing tasks. The experimental results achieved are promising in both directions, namely interactive image processing and sample-based medical image segmentation. / UOIT
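The sample-based setting described above — tune parameters so the processed image matches an expert-provided result — can be sketched with a minimal metaheuristic. The (1+1) evolution strategy, single gamma parameter, and fitness function below are all illustrative assumptions standing in for the thesis's evolutionary methods and eight-parameter snakes, not the actual method used.

```python
import numpy as np

rng = np.random.default_rng(0)

def enhance(img, gamma):
    """A one-parameter 'enhancement' (gamma correction) whose setting we tune."""
    return img ** gamma

def fitness(gamma, img, expert):
    """Sample-based fitness: mismatch against an expert-provided result."""
    return np.mean((enhance(img, gamma) - expert) ** 2)

def one_plus_one_es(img, expert, sigma=0.3, iters=200):
    """(1+1) evolution strategy: mutate the current parameter, keep the
    mutant only if it matches the expert sample better.  A toy stand-in
    for the metaheuristics discussed in the abstract."""
    g = 1.0
    best = fitness(g, img, expert)
    for _ in range(iters):
        cand = abs(g + sigma * rng.standard_normal())   # mutate, keep positive
        f = fitness(cand, img, expert)
        if f < best:                                    # greedy selection
            g, best = cand, f
    return g, best

img = rng.random((32, 32))
expert = img ** 2.2          # pretend an expert produced this target sample
g, err = one_plus_one_es(img, expert)
print(round(g, 2), err)
```

The recovered `g` converges toward the gamma that generated the expert sample; with correlated multi-parameter problems like snake tuning, population-based metaheuristics replace this single-parameter hill climb.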
146

Content-Adaptive Automatic Image Sharpening

Tajima, Johji, Kobayashi, Tatsuya January 2010 (has links)
No description available.
147

Image analysis techniques for classification of pulmonary disease in cattle

Miller, C. Denise 13 September 2007 (has links)
Histologic analysis of tissue samples is often a critical step in the diagnosis of disease. However, this type of assessment is inherently subjective, and consequently a high degree of variability may occur between results produced by different pathologists. Histologic analysis is also a very time-consuming task for pathologists. Computer-based quantitative analysis of tissue samples shows promise both for reducing the subjectivity of traditional manual tissue assessments and for potentially reducing the time required to analyze each sample.

The objective of this thesis project was to investigate image processing techniques and to develop software which could be used as a diagnostic aid in pathology assessments of cattle lung tissue samples. The software examines digital images of tissue samples, identifying and highlighting the presence of a set of features that indicate disease and that can be used to distinguish various pulmonary diseases from one another. The output of the software is a series of segmented images with relevant disease indicators highlighted, and measurements quantifying the occurrence of these features within the tissue samples. Results of the software analysis of a set of 50 cattle lung tissue samples were compared to the detailed manual analysis of these samples by a pathology expert.

The combination of image analysis techniques implemented in the thesis software shows potential. Detection of each of the disease indicators is successful to some extent, and in some cases the analysis results are extremely good. There is a large difference in accuracy rates for identification of the set of disease indicators, however, with sensitivity values ranging from a high of 94.8% to a low of 22.6%. This wide variation in result scores is partially due to limitations of the methodology used to determine accuracy.
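The sensitivity figures quoted in the abstract (94.8% down to 22.6%) follow the standard definition: the fraction of expert-marked positives that the software also flags. A minimal sketch, with hypothetical example data (the thesis's actual indicators and counts are not reproduced here):

```python
import numpy as np

def sensitivity(detected, truth):
    """Sensitivity (true positive rate) for one disease indicator:
    TP / (TP + FN), i.e. the share of expert-marked positives
    that the software also detects."""
    detected = np.asarray(detected, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(detected, truth).sum()
    fn = np.logical_and(~detected, truth).sum()
    return tp / (tp + fn) if (tp + fn) else float("nan")

# Hypothetical indicator: expert marks 8 positives, software finds 6 of them
# (plus one false positive, which sensitivity deliberately ignores).
truth    = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
detected = [1, 1, 1, 1, 1, 1, 0, 0, 1, 0]
print(f"{sensitivity(detected, truth):.1%}")   # 75.0%
```

Because sensitivity ignores false positives, a full evaluation would pair it with specificity or precision per indicator.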
148

Evaluation of Lung Perfusion Using Pre- and Post-Contrast-Enhanced CT Images – Pulmonary Embolism

Weng, Ming-hsu 15 July 2005 (has links)
In recent years, computed tomography (CT) has become an increasingly important tool in clinical diagnosis, mainly because of the advent of fast scanning techniques and the high spatial resolution of the imaging hardware. In addition to detailed morphological information, functional CT also provides physiological information, such as perfusion, which can help doctors make better decisions. Our goal in this work is to evaluate lung perfusion by comparing pre- and post-contrast-enhanced CT images. After the contrast agent is injected, it flows with the blood stream and causes temporal changes in CT values. Therefore, we can quantify perfusion from the changes in CT values between pre- and post-contrast-enhanced CT images. Then, guided by color-coded maps, a quantitative analysis for the assessment of lung perfusion can be performed. As a result, it is easier for an observer to determine the lung perfusion distribution. Moreover, we can use color-coded images to visualize pulmonary embolism and monitor therapeutic efficacy.
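The core computation the abstract describes — enhancement as the change in CT values between the pre- and post-contrast scans, normalized for a color-coded display — can be sketched as follows. The Hounsfield-unit display window and the toy 4×4 arrays are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def perfusion_map(pre, post, lung_mask, window=(0.0, 80.0)):
    """Perfusion surrogate: enhancement = post - pre (in HU), restricted to
    the lung mask, then scaled to [0, 1] over a display window so it can be
    fed to a color map.  The window values are illustrative assumptions."""
    enh = (post.astype(float) - pre.astype(float)) * lung_mask
    lo, hi = window
    return np.clip((enh - lo) / (hi - lo), 0.0, 1.0)

pre  = np.full((4, 4), -800.0)     # unenhanced lung parenchyma (HU)
post = pre + 40.0                  # contrast raises CT values where perfused
post[0, 0] = pre[0, 0]             # one unperfused (embolic) region: no change
mask = np.ones((4, 4))

pmap = perfusion_map(pre, post, mask)
print(pmap[0, 0], pmap[1, 1])      # 0.0 0.5
```

The unperfused region maps to 0 and normally perfused tissue to mid-scale, so an embolic defect appears as a cold spot once `pmap` is passed through a color map.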
149

Information Mining of Image Annotation

Lai, Shih-jin 02 July 2006 (has links)
Traditional content-based image retrieval supports image searches based on color, texture, and shape. However, it is difficult and unintuitive for most users to query images with these low-level features; most users prefer to search by keywords. For example, Google has recently provided an image search service; although it is called image search, it actually searches by keywords, not by image content. For this reason, MPEG-7 now supports a textual annotation standard: the MPEG-7 Multimedia Description Schemes (DSs), which are metadata structures for describing and annotating audio-visual (AV) content. However, manual annotation of images or video is time-consuming and expensive. We propose a system that helps generate suitable annotations automatically. We extract fractal image features and use the Diverse Density algorithm to train models. In this way, the user and the system can interact in real time. As the set of trained models in the database grows, the system's auto-annotation success rate increases.
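The Diverse Density idea the abstract relies on can be sketched in a few lines: a concept point scores high if it lies close to some instance in every positive bag (image annotated with the keyword) and far from all instances in negative bags. The noisy-OR form below follows the standard Diverse Density formulation; the 2-D feature points are hypothetical stand-ins for the fractal features the thesis extracts.

```python
import numpy as np

def bag_prob(bag, t, scale=1.0):
    """Noisy-OR probability that concept point t 'explains' a bag:
    high if at least one instance in the bag is near t."""
    d2 = np.sum((np.asarray(bag) - t) ** 2, axis=1) / scale ** 2
    return 1.0 - np.prod(1.0 - np.exp(-d2))

def diverse_density(t, pos_bags, neg_bags, scale=1.0):
    """DD(t): product over positive bags of bag_prob, times the product
    over negative bags of (1 - bag_prob).  Maximizing over t locates
    the concept shared by the positive bags."""
    p = np.prod([bag_prob(b, t, scale) for b in pos_bags])
    n = np.prod([1.0 - bag_prob(b, t, scale) for b in neg_bags])
    return p * n

# Hypothetical 2-D feature space: both positive bags share an instance
# near the origin; the negative bag covers the other instances.
pos = [[[0.0, 0.0], [5.0, 5.0]], [[0.1, -0.1], [9.0, 2.0]]]
neg = [[[5.0, 5.0], [9.0, 2.0]]]
print(diverse_density(np.array([0.0, 0.0]), pos, neg) >
      diverse_density(np.array([5.0, 5.0]), pos, neg))   # True
```

In the annotation setting, each training image is a bag of feature vectors, and the maximizer of DD serves as the model for one keyword.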
150

Das neue Bild vom Gottesbild : Bild und Theologie bei Meister Eckhart [The New Image of the Image of God: Image and Theology in Meister Eckhart] /

Wilde, Mauritius, January 2000 (has links)
Dissertation--Kath.-Theologische Fakultät--Tübingen--Universität, 1998. / Bibliogr. p. 368-384.
