About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Single Complex Image Matting

Shen, Yufeng Unknown Date
No description available.
2

Single Complex Image Matting

Shen, Yufeng 06 1900
Single image matting refers to the problem of accurately estimating the foreground object given only one input image. It is a fundamental technique in many image editing applications and has been studied extensively in the literature. Various matting techniques and systems have been proposed, and impressive advances have been made in efficiently extracting high-quality mattes. However, existing matting methods usually perform well only for relatively uniform and smooth images and generate noisy alpha mattes for complex images. The main motivation of this thesis is to develop a new matting approach that can handle complex images. We examine color sampling and alpha propagation, two popular techniques employed by many state-of-the-art matting methods, in detail to understand why their performance degrades significantly for complex images. The main contribution of this thesis is the development of two novel matting algorithms that can handle images with complex texture patterns. The first proposed matting method targets complex images whose background has a homogeneous texture pattern. A novel texture synthesis scheme is developed to use the known texture information to infer the texture in the unknown region and thus alleviate the problems introduced by a textured background. The second proposed matting algorithm targets complex images with heterogeneous texture patterns. A new algorithm for identifying pure foreground and background pixels in the unknown region is used to handle the large color variation introduced by complex images. Our experimental results, both qualitative and quantitative, show that the proposed matting methods can effectively handle images with complex backgrounds and generate cleaner alpha mattes than existing matting methods.
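The color-sampling step referred to in the abstract can be made concrete with a small sketch. The snippet below is not taken from the thesis; it only illustrates the standard estimate used by sampling-based matting methods, in which the observed color of an unknown pixel is projected onto the line between a sampled foreground color F and background color B under the compositing model I = alpha*F + (1 - alpha)*B.

```python
import numpy as np

def estimate_alpha(I, F, B):
    """Estimate alpha for an observed color I given one sampled
    foreground color F and one sampled background color B, assuming
    the compositing model I = alpha*F + (1 - alpha)*B."""
    I, F, B = np.asarray(I, float), np.asarray(F, float), np.asarray(B, float)
    d = F - B
    denom = np.dot(d, d)
    if denom < 1e-8:           # degenerate sample pair: F and B nearly identical
        return 0.5
    alpha = np.dot(I - B, d) / denom
    return float(np.clip(alpha, 0.0, 1.0))

# Example: a pixel halfway between a red foreground sample and a blue
# background sample gets alpha close to 0.5.
print(estimate_alpha([0.5, 0.0, 0.5], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
```

Textured, complex regions break the assumption that good (F, B) sample pairs exist near the unknown pixel, which is exactly the failure mode the thesis targets.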
3

A Joint Dictionary-Based Single-Image Super-Resolution Model

Hu, Jun January 2016
Image super-resolution aims at restoring a high-resolution image with satisfactory novel details. In recent years, learning-based single-image super-resolution has been developed and shown to produce satisfactory results. With one or more dictionaries trained on a training set, learning-based super-resolution establishes a mapping between low-resolution images and their corresponding high-resolution ones. Among these algorithms, sparsity-based super-resolution has demonstrated outstanding performance in extensive experiments. By utilizing compact dictionaries, this class of super-resolution algorithms is efficient, with low computational complexity, and has shown great potential for practical applications. Our proposed model, the Joint Dictionary-based Super-Resolution (JDSR) algorithm, is a new sparsity-based super-resolution approach. Based on the observation that the initial values of the Non-locally Centralized Sparse Representation (NCSR) model affect the final reconstruction, we initialize it with the results of Zeyde's model. To improve the results further, we also add a gradient histogram preservation term to the sparse model of NCSR and modify the reference histogram estimation with a simple edge-detection-based enhancement so that the estimated histogram is closer to the ground truth. The experimental results show that our method outperforms state-of-the-art methods in terms of sharper edges, clearer textures and better novel details.
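As a rough illustration of the sparsity-based pipeline this abstract builds on (not JDSR itself; its NCSR initialization and gradient histogram preservation term are not reproduced here), the sketch below codes a low-resolution patch feature over a low-resolution dictionary and synthesizes the high-resolution patch with the coupled high-resolution dictionary. The dictionaries D_lr and D_hr and the regularization weight are placeholders.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 with plain ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def super_resolve_patch(y_lr, D_lr, D_hr, lam=0.1):
    """Code the low-res patch feature over D_lr, then synthesize the
    high-res patch with the same coefficients over the coupled D_hr."""
    a = ista(D_lr, y_lr, lam)
    return D_hr @ a
```

The compactness of the coupled dictionaries is what keeps this class of methods computationally cheap relative to searching a large external patch database.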
4

VIRTUAL HERITAGE RECONSTRUCTION: THE OLD MAIN CHURCH OF CURITIBA, BRAZIL

KOZAN, JOSE M. 06 April 2004
No description available.
5

Performance enhancement of wide-range perception issues for autonomous vehicles

Sharma, Suvash 13 May 2022
Due to the mission-critical nature of autonomous driving, the underlying scene understanding algorithms should be given special care during their development. In particular, they should be designed with precise consideration of accuracy and run-time. Accuracy must be treated strictly: if it is compromised, the environment is interpreted incorrectly, which may ultimately lead to accidents. Run-time is equally important, since a delayed understanding of the scene hampers the real-time response of the vehicle and again risks accidents. Both depend on several factors, such as the design and complexity of the algorithms, the nature of the objects or events encountered in the environment, and weather-induced effects. In this work, several novel scene understanding algorithms based on semantic segmentation are devised. First, a transfer learning technique is proposed to transfer knowledge from a data-rich domain to a data-scarce off-road driving domain for semantic segmentation, so that the learned information is transferred efficiently from one domain to the other while reducing run-time and increasing accuracy. Second, the performance of several segmentation algorithms is assessed under rainy conditions ranging from light to severe, and two methods for achieving robustness are proposed. Third, a new method for removing rain from the input images is proposed. Since autonomous vehicles are rich in sensors, each capturing a different type of information, it is worth fusing the information from all available sensors. Fourth, a fusion mechanism with a novel algorithm that applies local and non-local attention in a cross-modal setting, using RGB camera images and lidar-based images for road detection with semantic segmentation, is implemented and validated for different driving scenarios. Fifth, a conceptually new representation of off-road driving trails, called Traversability, is introduced. To establish the correlation between a vehicle's capability and the level of difficulty of a driving trail, a new dataset called CaT (CAVS Traversability) is introduced. This dataset supports future research in several off-road driving applications, including military purposes and robotic navigation.
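As a minimal sketch of the first contribution only (transferring knowledge from a data-rich to a data-scarce segmentation domain), and not of the thesis's actual method, the snippet below fine-tunes a torchvision segmentation model pre-trained on a generic dataset for a hypothetical off-road class set. The class count, learning rate and layer-freezing strategy are illustrative assumptions.

```python
import torch
import torchvision

# Start from a segmentation network pre-trained on a data-rich domain
# (the weights shipped with torchvision) and re-target its head to the
# classes of a data-scarce off-road dataset.
NUM_OFFROAD_CLASSES = 8   # hypothetical class count for the off-road data

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = torch.nn.Conv2d(256, NUM_OFFROAD_CLASSES, kernel_size=1)
model.train()

# Freeze the backbone so only the new head is trained at first, which
# reduces the risk of overfitting the small target-domain set.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss(ignore_index=255)

def train_step(images, labels):
    """One fine-tuning step: images are (N,3,H,W) floats, labels (N,H,W) long."""
    optimizer.zero_grad()
    out = model(images)["out"]          # logits, (N, C, H, W)
    loss = criterion(out, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```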
6

Single-Image Super-Resolution via Regularized Extreme Learning Regression for Imagery from Microgrid Polarimeters

Sargent, Garrett Craig 24 May 2017
No description available.
7

Generisanje prostora na osnovu perspektivnih slika i primena u oblasti graditeljskog nasleđa / Modeling Based on Perspective Images and Application in Cultural Heritage

Stojaković Vesna 16 August 2011
In this research a new semi-automated normative image-based modelling system is created. The system comprises a number of procedures used to transform a two-dimensional medium, most commonly photographs, into a three-dimensional structure. The approach is adapted to the properties of complex projects in the domain of visualization of cultural heritage, and an application of the system is presented to demonstrate its practical value.
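The abstract does not detail the individual procedures, but the elementary operation behind any image-based modelling pipeline of this kind is recovering 3D points from corresponding pixels in two or more perspective images. The sketch below shows plain linear (DLT) triangulation in numpy; it is a generic illustration, not the thesis's semi-automated normative procedure.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two perspective
    images, given 3x4 projection matrices P1, P2 and pixel coordinates
    x1 = (u1, v1), x2 = (u2, v2) of the same point in both images."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # back to inhomogeneous coordinates
```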
8

Single image scene-depth estimation based on self-supervised deep learning : For perception in autonomous heavy duty vehicles

Piven, Yegor January 2021
Depth information is a vital component for perceiving the 3D structure of a vehicle's surroundings in the autonomous driving scenario. The ubiquity and relatively low cost of camera equipment make image-based depth estimation very attractive compared with specialised sensors. Classical image-based depth estimation approaches typically rely on multi-view geometry, requiring alignment and calibration between multiple image sources, which is both cumbersome and error-prone. Single images, in contrast, lack both temporal information and multi-view correspondences, and depth information is lost in the projection from the 3D world to a 2D image, which makes single image depth estimation an ill-posed problem.

In recent years, deep learning approaches have been widely proposed for single image depth estimation. The problem is typically tackled in a supervised manner, requiring image data with pixel-wise depth information; acquiring large amounts of such data that is both varied and accurate is laborious and costly. As an alternative, a number of self-supervised approaches show that models for single image depth estimation can be trained on synchronized stereo image pairs or sequences of monocular images instead of depth-labelled data. This thesis investigates the self-supervised approach based on sequences of monocular images, training and evaluating one of the state-of-the-art methods on both the popular public KITTI dataset and the data of the host company, Scania. A number of extensions are implemented for the chosen method: weak supervision with velocity data, geometry consistency constraints and a self-attention mechanism.

The resulting models showed good depth estimation performance for the major components of the scene, such as nearby roads and buildings, but struggled at longer ranges and with dynamic objects and thin structures. Minor qualitative and quantitative improvements were observed with the introduction of the geometry consistency loss and mask, as well as the self-attention mechanism; qualitatively, the models identified clearer object boundaries and better distinguished objects from their background. The geometry consistency loss also proved informative in low-texture regions of the image and resolved the artifacting observed when training models on Scania's data. Supervising the predicted translations with velocity data proved effective at enforcing the metric scale of the depth network's predictions, but a risk of overfitting to this supervision was observed when training on Scania's data. To resolve this issue, a velocity-supervised fine-tuning procedure is proposed as an alternative to velocity-supervised training from scratch; it removes the observed overfitting while still enabling the model to learn the metric scale. The proposed fine-tuning procedure was effective even on the KITTI dataset, where no overfitting was observed, suggesting its general applicability.
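One plausible form of the velocity-based weak supervision mentioned in the abstract is sketched below: the norm of the pose network's predicted inter-frame translation is tied to the distance implied by the measured vehicle speed, which anchors the otherwise scale-ambiguous depth predictions to metric units. This is an illustrative formulation, not necessarily the exact loss used in the thesis.

```python
import torch

def velocity_supervision_loss(pred_translation, speed, dt):
    """Weakly supervise the pose network's translation magnitude with the
    vehicle's measured speed, so predicted depth comes out in metric scale.

    pred_translation: (N, 3) predicted translation between frames (metres)
    speed:            (N,)   measured vehicle speed (m/s)
    dt:               (N,)   time elapsed between the frames (s)
    """
    expected_distance = speed * dt
    predicted_distance = pred_translation.norm(dim=1)
    return (predicted_distance - expected_distance).abs().mean()
```

Used as a fine-tuning objective rather than from the start of training, a term like this can inject metric scale without dominating the photometric self-supervision, which is the motivation the abstract gives for the proposed fine-tuning procedure.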
9

Odhad parametrů objektů z obrazů / Estimation of Object Parameters from Images

Přibyl, Bronislav January 2010
The rapid expansion of communication technologies in the last decade has increased the volume of information generated and shared by people and organisations. It is increasingly difficult to identify relevant content because tools and techniques supporting mass information management are lacking. As today's media are largely multimedia in character, image information is all the more important. This project describes software for automatic estimation of predefined object parameters from images. A C++ implementation of this algorithm is also described.
10

Designing Compressed Narrative using a Reactive Frame: The Influence of Spatial Relationships and Camera Composition on the Temporal Structure of Story Events

Maynard, Zachary C. 30 August 2012
No description available.
