21

Investigating Polynomial Fitting Schemes for Image Compression

Ameer, Salah 13 January 2009 (has links)
Image compression is a means of transmitting or storing visual data economically. Although many algorithms have been reported, research is still needed to cope with the continuous demand for more efficient transmission and storage. This work explores and implements polynomial fitting techniques as a means of block-based lossy image compression.

In an attempt to investigate non-polynomial models, a region-based scheme is implemented that fits the whole image with bell-shaped functions; the idea is simply to view an image as a 3D geographical map consisting of hills and valleys. The scheme, however, suffers from high computational demands and is inferior to many available image compression schemes, so only polynomial models receive further consideration.

A first-order polynomial (plane) model is designed to work in a multiplication- and division-free (MDF) environment. The intensity values of each image block are fitted to a plane, and the parameters are then quantized and coded. Blocking artefacts, a common drawback of block-based image compression techniques, are reduced using an MDF line-fitting scheme at block boundaries. A compression ratio of 62:1 at 28.8 dB is attainable for the standard image PEPPER, outperforming JPEG both objectively and subjectively in this part of the rate-distortion characteristic. Inter-block prediction can substantially improve the compression performance of the plane model, reaching 112:1 at 27.9 dB, at the cost of slightly higher computational complexity and reduced pipelining capability. Although JPEG2000 is not a block-based scheme, it is encouraging that the proposed prediction scheme compares favourably with it, both computationally and qualitatively; more experiments are needed, however, for a more concrete comparison.

To reduce blocking artefacts further, a new postprocessing scheme based on Weber's law is employed. Images postprocessed with this scheme are reported to be subjectively more pleasing, with a marginal increase in PSNR (<0.3 dB). Weber's law is also modified to perform edge detection and quality assessment tasks.

These results motivate the exploration of higher-order polynomials, using three parameters to maintain comparable compression performance. To investigate the impact of higher-order polynomials through an approximate asymptotic behaviour, a novel linear mapping scheme is designed. Though computationally demanding, the higher-order polynomial approximation schemes perform comparably to the plane model, which clearly demonstrates the plane model's powerful approximation capability. The proposed linear mapping scheme nonetheless constitutes a new approach to image modeling and is worth future consideration.
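The plane model at the heart of this abstract can be sketched as an ordinary least-squares fit of each block to z = a·x + b·y + c. Note this is a floating-point illustration only: the thesis itself uses a multiplication- and division-free formulation, and the block size here is an arbitrary choice.

```python
import numpy as np

def fit_plane(block):
    """Least-squares fit of z = a*x + b*y + c to one image block.

    Illustrative sketch: the thesis's MDF (multiplication- and
    division-free) variant is not reproduced here.
    """
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    params, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    return params  # (a, b, c): the three values that get quantized and coded

def reconstruct(params, shape):
    """Rebuild the block from its three plane parameters."""
    a, b, c = params
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return a * xs + b * ys + c
```

For a typical 8x8 block, the encoder thus stores three parameters instead of 64 intensities, which is where the high compression ratios come from.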
22

Intelligent Content-Aware Image Resizing System

Lin, Pao-Hung 07 September 2011 (has links)
With advances in technology, image display devices such as mobile phones, computers and televisions are ubiquitous in our lives. Because display sizes differ, digital image scaling is often applied when presenting images. For example, when large photos are viewed on a mobile phone, the entire picture tends to be scaled down, making the main subject quite small and inconvenient to view. To address this, this study offers an efficient, high-quality intelligent content-aware image resizing system. The system first analyzes the main area of the image and then applies an intelligent compression process to the entire image. Images therefore retain a complete main subject even after being resized, achieving an excellent visual effect that makes the main subject more prominent and obvious while also reducing the data volume of images. Beyond accommodating various display sizes, the technology can also be applied to video transmission (H.264/AVC) to reduce data volume effectively, making a substantial contribution to both image scaling and video coding.
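The abstract does not specify how the "main area" is analyzed; a common stand-in in content-aware resizers is a gradient-magnitude energy map, with low-energy regions shrunk first. A minimal sketch under that assumption, removing one lowest-energy column:

```python
import numpy as np

def energy_map(gray):
    """Gradient-magnitude energy: high values mark content (edges,
    texture) that a content-aware resizer should preserve. This is a
    generic stand-in, not the thesis's own main-subject analysis."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def remove_lowest_energy_column(gray):
    """Shrink width by one by deleting the column with least total
    energy (a crude, whole-column version of seam removal)."""
    e = energy_map(gray)
    col = np.argmin(e.sum(axis=0))
    return np.delete(gray, col, axis=1)
```

Repeating the removal step reaches any target width while flat background shrinks before the main subject does.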
23

Non-Photo-Realistic Illustrations with Artistic Style

Chen, Hsuan-Ming 08 January 2004 (has links)
NPR (Non-Photo-Realistic Rendering) is a new and rapidly developing research topic in image processing. The main purpose of NPR is to use computer algorithms to automatically generate pencil sketches, watercolors and oil paintings, something different from photographs. By contrast, PR (Photo-Realistic Rendering) aims to generate realistic imagery by computer algorithms, as in matting or inpainting. Furthermore, NPR has two modes: one with a physical model and one without. With a physical model, researchers can write programs that simulate NPR from the properties of the physical model; without one, they simulate NPR from their own observation and deliberation. This thesis belongs to the latter mode, NPR without a physical model. From an artist's viewpoint, drawing is a performance of light and shadow; scientifically, drawing depends on the degree of luminance, which tells the artist where to place blocks and in which direction to draw. This thesis mainly simulates oil painting in an impressionist style.
24

Non-Photo-Realistic Illustrations

Lu, Yi-Mu 09 October 2002 (has links)
NPR (Non-Photo-Realistic Rendering) is a new and rapidly developing research topic in image processing. The main purpose of NPR is to use computer algorithms to automatically generate pencil sketches or watercolors, something different from photographs. By contrast, PR (Photo-Realistic Rendering) aims to generate realistic imagery by computer algorithms, as in matting. Furthermore, NPR has two modes: one with a physical model and one without. With a physical model, researchers can write programs that simulate NPR from the properties of the physical model; without one, they simulate NPR from their own observation and deliberation. This thesis belongs to the latter mode, NPR without a physical model. From an artist's viewpoint, drawing is a performance of light and shadow; scientifically, drawing depends on the degree of luminance, which tells the artist where to place blocks and in which direction to draw. Chapter 1 introduces the art background and previous research. Chapter 2 describes the theories that can be used and presents their results. Chapter 3 describes the methods this thesis uses and the necessary amendments, and presents the results.
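The role luminance plays in both NPR abstracts above can be made concrete with a standard RGB-to-luma conversion and a gradient-based stroke direction. The Rec. 601 weights and the isophote-following rule are conventional choices, assumed here rather than taken from the theses:

```python
import numpy as np

def luminance(rgb):
    """Rec. 601 luma: the 'degree of luminance' that guides where and
    how strokes are placed."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def stroke_direction(gray):
    """Stroke angle per pixel: perpendicular to the luminance gradient,
    so strokes follow isophotes (lines of constant brightness), a common
    heuristic in painterly rendering."""
    gy, gx = np.gradient(gray.astype(float))
    return np.arctan2(gy, gx) + np.pi / 2
```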
25

Multi-scale edge-guided image gap restoration

Langari, Bahareh January 2016 (has links)
The focus of this research is the estimation of gaps (missing blocks) in digital images. Two main issues were identified: (1) the appropriate domains for image gap restoration and (2) the methodologies for gap interpolation. Multi-scale transforms provide an appropriate framework for gap restoration; their main advantages are decomposition into a set of frequencies and scales and the ability to progressively reduce the size of the gap to one sample wide at the transform apex. Two types of multi-scale transform were considered for comparative evaluation: the 2-dimensional (2D) discrete cosine transform (DCT) pyramid and the 2D discrete wavelet transform (DWT). For image gap estimation, a family of conventional weighted interpolators and directional edge-guided interpolators is developed and evaluated. Two types of edge were considered: 'local' edges, or textures, and 'global' edges such as the boundaries between objects or within/across patterns in the image. For local edge (texture) modelling, a number of methods were explored that aim to reconstruct across the restored gap a set of gradients matching those computed from the known neighbourhood; these differential gradients are estimated along the vertical, horizontal and cross directions for each pixel of the gap. The edge-guided interpolators aim to operate on distinct regions confined within edge lines. For global edge-guided interpolation, the two main methods explored are the Sobel and Canny detectors, the latter providing improved edge detection. Combining and integrating the different multi-scale domains, local edge interpolators, global edge-guided interpolators and iterative edge estimation provided a variety of configurations that were comparatively explored and evaluated. For evaluation, a set of images commonly used in the literature was employed, together with simulated regular and random image gaps at a variety of loss rates. The performance measures used are the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The results obtained are better than the state of the art reported in the literature.
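The first performance measure, PSNR, is straightforward to sketch; SSIM needs a windowed mean/variance computation and is omitted. The 255 peak assumes 8-bit images:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    its gap-restored version. Higher is better; identical images give
    infinity."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```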
26

Feature Extraction From Images of Buildings Using Edge Orientation and Length

Danhall, Viktor January 2012 (has links)
Extracting information from a scene captured in digital images, where the information represents some kind of feature, is an important process in image analysis. Both the speed and the accuracy of this process are very important, since many analysis applications either require analysis of very large data sets or require the data to be extracted in real time; examples include 2-dimensional and 3-dimensional object recognition and motion detection. This work focuses on extracting salient features from scenes of buildings, using a joint histogram based on both edge orientation and edge length to aid the extraction of the relevant features. The results are promising but need further refinement before they can be used successfully, so a fair amount of the work presented remains at the level of theory.
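A joint edge histogram of the kind described might be sketched as follows. True segment length requires edge linking, so this illustration substitutes gradient magnitude as a rough per-pixel proxy for length; that substitution and the bin counts are assumptions, not the thesis's method:

```python
import numpy as np

def joint_edge_histogram(gray, n_orient=8, n_len=8):
    """2D histogram over (edge orientation, edge strength).

    Caveat: the thesis pairs orientation with segment *length*;
    gradient magnitude stands in for length here to keep the sketch
    self-contained.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientation in [-pi, pi]
    mask = mag > 0            # only edge-bearing pixels contribute
    hist, _, _ = np.histogram2d(ang[mask], mag[mask],
                                bins=[n_orient, n_len],
                                range=[[-np.pi, np.pi], [0, mag.max()]])
    return hist
```

A building facade with long horizontal and vertical edges would concentrate mass in a few orientation rows of this histogram, which is what makes it a useful signature.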
27

Quality Inspection of Screw Heads Using Memristor Neural Networks

Liu, Xiaojie 01 December 2019 (has links)
Quality inspection is an indispensable part of the screw production process for hardware manufacturers. In general, manufacturers test screw quality by twisting screws with an electric screwdriver, but this manual inspection has limitations and shortcomings. First, its efficiency is low. Second, manual inspection cannot easily run continuously for 24 hours, which leads to high labor costs. In this thesis, to enhance inspection efficiency and reduce test costs, we propose using the image recognition capability of memristor neural networks to check the quality of screws. We discuss different training models, namely convolutional neural networks (CNNs) and a one-layer memristor neural network with fixed learning rates. Using a dataset of 8,202 screw head images, experimental results show that the classification accuracy of the CNNs and the memristor neural network reaches 96% and 90%, respectively, which demonstrates the effectiveness of the proposed method.
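The one-layer memristor network can be pictured as an idealized crossbar whose conductances act as the layer's weights: output currents are weighted sums of input voltages. The softmax readout, array sizes, and plain gradient descent are illustrative assumptions; only the fixed learning rate comes from the abstract:

```python
import numpy as np

def crossbar_forward(conductances, voltages):
    """One forward pass through an idealized memristor crossbar: by
    Kirchhoff's current law each output current is the conductance-
    weighted sum of the input voltages. A softmax turns currents into
    class probabilities (an assumed readout, not the thesis's)."""
    currents = voltages @ conductances
    exp = np.exp(currents - currents.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

def sgd_step(conductances, x, y_onehot, lr=0.01):
    """One cross-entropy gradient step with a fixed learning rate,
    matching the 'fixed learning rates' the abstract mentions."""
    p = crossbar_forward(conductances, x)
    grad = x.T @ (p - y_onehot) / len(x)
    return conductances - lr * grad
```

In hardware, the update would be applied by programming pulses that nudge each memristor's conductance; here it is simulated as an ordinary weight update.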
28

Investigation of High-Pass Filtering for Edge Detection in Optical Scanning Holography

Zaman, Zayeem Habib 16 October 2023 (has links)
High-pass filtering has been shown to be a promising method for edge detection in optical scanning holography. By using a circular function as the pupil of the system, the radius of the circle can be varied to block out different ranges of frequencies. Implementing this system in simulation, however, yields an interesting result: as the radius increases, a single edge can split into two edges. To understand the specific conditions under which this split occurs, Airy pattern filtering and single-sided filtering were implemented to analyze the results from the original high-pass simulation. These methods were tested with different input objects to look for common patterns. Ultimately, no definitive answer was found: Airy pattern filtering gave inconsistent results across input objects, and single-sided filtering does not completely isolate the edge. Nonetheless, the documented results may aid future understanding of this phenomenon. / Master of Science / Holograms are three-dimensional recordings of an object, much as a photograph is a two-dimensional recording of an object. Detecting edges in images, and in the images reconstructed from holograms, can help us identify objects within the recorded scene. In computer vision, common edge detection techniques analyze the image's spatial frequency, that is, changes in relative intensity over space. One such technique is high-pass filtering, in which lower spatial frequencies are blocked out. High-pass filtering can also be applied to holographic imaging systems. However, when it is, detected edges can split into two as more of the lower frequencies are filtered out. This thesis examines the conditions under which this split-edge phenomenon occurs by modifying the original recorded object and the filtering mechanism, then analyzing the resulting holograms. While the results did not give a conclusive answer, they have been documented for the purpose of further research.
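The circular-pupil high-pass filter can be mimicked with ordinary Fourier-domain filtering. This is a sketch only; the full optical scanning holography model (pupil in the optical transfer function, complex hologram reconstruction) is not reproduced:

```python
import numpy as np

def circular_highpass(image, radius):
    """Zero out all spatial frequencies inside a disc of the given
    radius, the digital analogue of the circular pupil described in
    the abstract. Growing the radius blocks more low frequencies,
    the regime in which the split-edge effect is reported."""
    F = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - h / 2, xs - w / 2)
    F[dist <= radius] = 0  # block the low-frequency disc (incl. DC)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

Sweeping `radius` over a step-edge test object is the simulation setting in which one would look for the single edge splitting into two.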
29

Image Segmentation Using Deep Learning

Akbari, Nasrin 27 September 2022 (has links)
The image segmentation task divides an image into regions of similar pixels based on brightness, color, and texture, in which every pixel in the image is assigned a label. Segmentation is vital in numerous medical imaging applications, such as quantifying tissue size, localizing disease, treatment planning, and surgery guidance. This thesis focuses on two medical image segmentation tasks: retinal vessel segmentation in fundus images and brain segmentation in 3D MRI images. Finally, we introduce LEON, a lightweight neural network for edge detection.

The first part of this thesis proposes a lightweight neural network for retinal blood vessel segmentation. Our model achieves cutting-edge results with fewer parameters, obtaining the most outstanding performance on the CHASEDB1 and DRIVE datasets with F1 measures of 0.8351 and 0.8242, respectively. Our model has few parameters (0.34 million) compared to other networks such as LadderNet with 1.5 million parameters and DCU-net with 1 million parameters.

The second part of this thesis investigates the association between whole and regional volumetric alterations and increasing age in a large group of healthy subjects (n=6739, age range: 30-80). We used a deep learning model for brain segmentation to extract quantified whole and regional brain volumes in 95 classes for volumetric analysis.

The third part of the thesis introduces a new Lightweight Edge Detection Network (LEON); edge- or boundary-based segmentation methods work by finding abrupt changes and discontinuities in the intensity values. The proposed approach integrates the advantages of deformable units and depthwise separable convolutions to create a lightweight backbone employed for efficient feature extraction. Our experiments on BSDS500 and NYUDv2 show that LEON, while requiring only 500,000 parameters, outperforms the current lightweight edge detectors without using pre-trained weights. / Graduate / 2022-10-12
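The parameter saving behind the depthwise separable factorization that LEON's backbone builds on can be checked by simple counting (bias terms omitted; the channel sizes below are arbitrary examples, not LEON's actual configuration):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution layer."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution that mixes channels:
    the factorization behind most lightweight backbones."""
    return c_in * k * k + c_in * c_out
```

For a 3x3 layer mapping 32 to 64 channels this gives 18,432 vs 2,336 parameters, roughly an 8x reduction, which is how networks like the 0.34M-parameter vessel model stay small.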
30

Object detection algorithms analysis and implementation for augmented reality system

Zavistanavičiūtė, Rasa 05 November 2013 (has links)
Object detection is the initial step in any image analysis procedure and is essential for the performance of object recognition and augmented reality systems. Research concerning the detection of edges and blobs is particularly rich, and many algorithms and methods have been proposed in the literature. This master's thesis presents the four most common blob and edge detectors, proposes a method for separating detected numbers, and describes the experimental setup and results for object detection and detected-number separation performance. Finally, we determine which detector demonstrates the best results for a mobile augmented reality system.
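The abstract does not name the four detectors compared; as one representative blob detector, a difference-of-Gaussians (DoG) response can be sketched with plain FFT convolution. The sigma values and kernel sizes are illustrative choices:

```python
import numpy as np

def gaussian_kernel(sigma, size=None):
    """Normalized 2D Gaussian; default size covers about 3 sigma."""
    if size is None:
        size = int(6 * sigma) | 1  # force an odd width
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def fft_convolve(img, kern):
    """'Same'-size circular convolution via the FFT, with the kernel
    center aligned to the origin."""
    h, w = img.shape
    K = np.zeros((h, w))
    kh, kw = kern.shape
    K[:kh, :kw] = kern
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img.astype(float)) * np.fft.fft2(K)))

def dog_response(gray, sigma=1.0, k=1.6):
    """Difference-of-Gaussians blob response: peaks mark blob centers
    whose scale matches sigma."""
    g1 = fft_convolve(gray, gaussian_kernel(sigma))
    g2 = fft_convolve(gray, gaussian_kernel(k * sigma))
    return g1 - g2
```

Locating maxima of this response over a small pyramid of sigmas is the standard way such a detector feeds a recognition or AR pipeline.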
