2 |
Single Complex Image Matting. Shen, Yufeng, 06 1900.
Single image matting refers to the problem of accurately estimating the foreground object given only one input image. It is a fundamental technique in many image editing applications and has been extensively studied in the literature. Various matting techniques and systems have been proposed, and impressive advances have been achieved in efficiently extracting high-quality mattes. However, existing matting methods usually perform well only for relatively uniform and smooth images, and generate noisy alpha mattes for complex images. The main motivation of this thesis is to develop a new matting approach that can handle complex images. We examine color sampling and alpha propagation in detail, two popular techniques employed by many state-of-the-art matting methods, to understand why the performance of these methods degrades significantly for complex images. The main contribution of this thesis is the development of two novel matting algorithms that can handle images with complex texture patterns. The first proposed matting method is aimed at complex images whose background has a homogeneous texture pattern. A novel texture synthesis scheme is developed to use the known texture information to infer the texture in the unknown region and thus alleviate the problems introduced by a textured background. The second proposed matting algorithm is for complex images with heterogeneous texture patterns. A new foreground and background pixel identification algorithm is used to identify the pure foreground and background pixels in the unknown region and thus effectively handle the large color variation introduced by complex images. Our experimental results, both qualitative and quantitative, show that the proposed matting methods can effectively handle images with complex backgrounds and generate cleaner alpha mattes than existing matting methods.
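For context on the color-sampling component discussed above, the sketch below shows the standard way a sampling-based matting method estimates alpha for an unknown pixel from one candidate foreground/background color pair, using the compositing model I = αF + (1 − α)B. It is a generic illustration of the technique, with illustrative function names, not the algorithm proposed in this thesis.

```python
import numpy as np

def alpha_from_samples(pixel, fg_sample, bg_sample, eps=1e-6):
    """Project an observed color onto the line between candidate foreground
    and background colors (I = alpha*F + (1-alpha)*B) and return the
    implied alpha, clipped to [0, 1]."""
    pixel, fg, bg = (np.asarray(c, dtype=np.float64) for c in (pixel, fg_sample, bg_sample))
    diff = fg - bg
    alpha = np.dot(pixel - bg, diff) / (np.dot(diff, diff) + eps)
    return float(np.clip(alpha, 0.0, 1.0))

def chromatic_distortion(pixel, fg_sample, bg_sample):
    """Residual of the compositing equation for the estimated alpha;
    sampling methods use this to rank candidate (F, B) pairs."""
    alpha = alpha_from_samples(pixel, fg_sample, bg_sample)
    fg = np.asarray(fg_sample, float)
    bg = np.asarray(bg_sample, float)
    residual = np.asarray(pixel, float) - (alpha * fg + (1 - alpha) * bg)
    return float(np.linalg.norm(residual))

# Example: a gray pixel halfway between a white foreground and a black background.
print(alpha_from_samples([128, 128, 128], [255, 255, 255], [0, 0, 0]))  # ~0.5
```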
|
3 |
Enabling Trimap-Free Image Matting via Multitask Learning. LI, CHENGQI, January 2021.
The trimap-free natural image matting problem is an important computer vision task in which foreground objects are extracted from given images without an extra trimap input.
Compared with trimap-based matting algorithms, trimap-free algorithms are more prone to false detections when the foreground object is not well defined. To address this problem, we design a novel structure (SegMatting) that handles foreground segmentation and alpha matte prediction simultaneously and is able to produce high-quality mattes from RGB inputs alone. This entangled structure enables interactive information exchange between the binary segmentation task and the alpha matte prediction task, and we further design a hybrid loss to adaptively balance the two tasks during multitask learning (a sketch of one possible weighting scheme follows this abstract).
Additionally, we adopt a salient object detection dataset to pretrain our network so that we can obtain a more accurate foreground segment before training.
Experiments indicate that the proposed SegMatting qualitatively and quantitatively outperforms most previous trimap-free models by a significant margin, while remaining competitive with trimap-based methods. / Thesis / Master of Science in Electrical and Computer Engineering (MSECE)
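The abstract does not give the exact form of the hybrid loss, so the following is only a minimal sketch of one plausible formulation: a binary cross-entropy term for the segmentation head plus an L1 term for the alpha head, adaptively balanced by learnable uncertainty weights. The class name `HybridMattingLoss` and the uncertainty-based weighting are illustrative assumptions, not SegMatting's published loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridMattingLoss(nn.Module):
    """Illustrative hybrid loss: BCE for the segmentation head and L1 for
    the alpha head, balanced by learnable log-variances (uncertainty
    weighting). The actual SegMatting loss may differ."""
    def __init__(self):
        super().__init__()
        self.log_var_seg = nn.Parameter(torch.zeros(1))
        self.log_var_alpha = nn.Parameter(torch.zeros(1))

    def forward(self, seg_logits, seg_gt, alpha_pred, alpha_gt):
        # seg_gt is a float mask in [0, 1]; alpha_gt is the ground-truth matte.
        seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_gt)
        alpha_loss = F.l1_loss(alpha_pred, alpha_gt)
        # Each term is down-weighted by its learned uncertainty and
        # regularized so the weights cannot collapse to zero.
        return (torch.exp(-self.log_var_seg) * seg_loss + self.log_var_seg
                + torch.exp(-self.log_var_alpha) * alpha_loss + self.log_var_alpha).squeeze()

# Usage sketch:
# loss_fn = HybridMattingLoss()
# loss = loss_fn(seg_logits, seg_gt, alpha_pred, alpha_gt)
# loss.backward()
```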
|
4 |
Interactive Object Selection and Matting for Video and Images. Price, Brian L., 10 August 2010.
Video segmentation, the process of selecting an object out of a video sequence, is a fundamentally important process for video editing and special effects. However, it remains an unsolved problem due to many difficulties such as large or rapid motions, motion blur, lighting and shadow changes, complex textures, similar colors in the foreground and background, and many others. While the human vision system relies on multiple visual cues and higher-order understanding of the objects involved in order to perceive the segmentation, current algorithms usually depend on a small amount of information to assist a user in selecting a desired object. This causes current methods to often fail for common cases. Because of this, industry still largely relies on humans to trace the object in each frame, a tedious and expensive process. This dissertation investigates methods of segmenting video by propagating the segmentation from frame to frame using multiple cues to maximize the amount of information gained from each user interaction. New and existing methods are incorporated in propagating as much information as possible to a new frame, leveraging multiple cues such as object colors or mixes of colors, color relationships, temporal and spatial coherence, motion, shape, and identifiable points. The cues are weighted and applied on a local basis depending on the reliability of the cue in each region of the image. The reliability of the cues is learned from any corrections the user makes. In this framework, every action of the user is examined and leveraged in an attempt to provide as much information as possible to guarantee a correct segmentation. Propagating segmentation information from frame to frame using multiple cues and learning from the user interaction allows users to more quickly and accurately extract objects from video while exerting less effort.
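To make the cue-weighting idea concrete, here is a minimal numpy sketch of fusing per-pixel foreground probabilities from several cues using local reliability weights, plus a toy rule that boosts cues agreeing with a user correction. The function names, the normalized weighted average, and the update rule are illustrative assumptions; the dissertation's actual learning scheme is not reproduced here.

```python
import numpy as np

def fuse_cues(cue_probs, cue_weights, eps=1e-8):
    """Combine per-pixel foreground probabilities from several cues.

    cue_probs   : array (num_cues, H, W), each map in [0, 1]
    cue_weights : array (num_cues, H, W), local reliability of each cue
    Returns a fused foreground probability map of shape (H, W).
    """
    cue_probs = np.asarray(cue_probs, dtype=np.float64)
    cue_weights = np.asarray(cue_weights, dtype=np.float64)
    return (cue_weights * cue_probs).sum(axis=0) / (cue_weights.sum(axis=0) + eps)

def update_weights_from_correction(cue_probs, cue_weights, correction_mask, true_label, lr=0.5):
    """Toy update: inside a user-corrected region, boost cues that agreed
    with the correct label and damp cues that did not."""
    agreement = 1.0 - np.abs(cue_probs - true_label)               # (num_cues, H, W)
    adjusted = cue_weights * (1.0 + lr * (agreement - 0.5) * correction_mask)
    return np.clip(adjusted, 1e-3, None)

# Example with two cues on a 4x4 frame.
probs = np.stack([np.full((4, 4), 0.8), np.full((4, 4), 0.3)])
weights = np.ones_like(probs)
print(fuse_cues(probs, weights))  # 0.55 everywhere
```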
|
5 |
Towards Real-time Mixed Reality Matting In Natural Scenes. Beato, Nicholas, 01 January 2012.
In Mixed Reality scenarios, background replacement is a common way to immerse a user in a synthetic environment. Properly identifying the background pixels in an image or video is a difficult problem known as matting. Proper alpha mattes usually come from human guidance, special hardware setups, or color-dependent algorithms. This is a consequence of the under-constrained nature of the per-pixel alpha blending equation. In constant color matting, research identifies and replaces a background that is a single color, known as the chroma key color. Unfortunately, the algorithms force a controlled physical environment and favor constant, uniform lighting. More generic approaches, such as natural image matting, have made progress finding alpha matte solutions in environments with naturally occurring backgrounds. However, even for the quicker algorithms, the generation of trimaps, indicating regions of known foreground and background pixels, normally requires human interaction or offline computation. This research addresses ways to automatically solve an alpha matte for an image in real time, and by extension a video, using a consumer-level GPU. It does so even in the context of noisy environments that result in less reliable constraints than found in controlled settings. To attack these challenges, we are particularly interested in automatically generating trimaps from depth buffers for dynamic scenes so that algorithms requiring denser constraints may be used. The resulting computation is parallelizable so that it may run on a GPU and should work for natural images as well as chroma key backgrounds. Extra input may be required, but when this occurs, commodity hardware available in most Mixed Reality setups should be able to provide the input. This allows us to provide real-time alpha mattes for Mixed Reality scenarios that take place in relatively controlled environments. As a consequence, while monochromatic backdrops (such as green screens or retro-reflective material) aid the algorithm's accuracy, they are not an explicit requirement. Finally, we explore a sub-image-based approach to parallelize an existing hierarchical approach on high resolution imagery. We show that locality can be exploited to significantly reduce the memory and compute requirements previously necessary when computing alpha mattes of high resolution images. We achieve this using a parallelizable scheme that is independent of both the matting algorithm and image features. Combined, these research topics provide a basis for Mixed Reality scenarios using real-time natural image matting on high definition video sources.
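As a concrete illustration of trimap generation from a depth buffer, the sketch below thresholds depth to obtain a candidate foreground mask and erodes both the mask and its complement to leave an unknown band along the silhouette. The threshold, band width, and 0/0.5/1 encoding are illustrative assumptions rather than the parameters used in this work.

```python
import numpy as np
from scipy import ndimage

def trimap_from_depth(depth, fg_max_depth, band=7):
    """Build a trimap from a depth buffer.

    depth        : (H, W) array of depth values (larger = farther)
    fg_max_depth : pixels closer than this are candidate foreground
    band         : width in pixels of the unknown band around the silhouette
    Returns a trimap with 1 = foreground, 0 = background, 0.5 = unknown.
    """
    fg = depth < fg_max_depth
    structure = np.ones((3, 3), dtype=bool)
    sure_fg = ndimage.binary_erosion(fg, structure, iterations=band)
    sure_bg = ndimage.binary_erosion(~fg, structure, iterations=band)
    trimap = np.full(depth.shape, 0.5, dtype=np.float64)
    trimap[sure_fg] = 1.0
    trimap[sure_bg] = 0.0
    return trimap

# Example: a synthetic depth map with a near "person" blob in the middle.
depth = np.full((120, 160), 3.0)
depth[30:90, 50:110] = 1.0
tri = trimap_from_depth(depth, fg_max_depth=2.0)
print({v: int((tri == v).sum()) for v in (0.0, 0.5, 1.0)})
```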
|
6 |
Automation of Closed-Form and Spectral Matting Methods for Intelligent Surveillance Applications. Alrabeiah, Muhammad, 16 December 2015.
Machine-driven analysis of visual data is the hard core of intelligent surveillance systems. Its main goal is to recognize different objects in the video sequence and their behaviour. Such an operation is very challenging due to the dynamic nature of the scene and the lack of semantic comprehension of visual data in machines. The general flow of the recognition process starts with the object extraction task. For a long time, this task has been performed using image segmentation. However, recent years have seen the emergence of another contender, image matting. As a well-known process, matting has a very rich literature, most of which is dedicated to interactive approaches for applications like movie editing. Thus, it was conventionally not considered for visual data analysis operations.

Following the new shift toward matting as a means of object extraction, two methods have stood out for their foreground-extraction accuracy and, more importantly, their automation potential. These methods are Closed-Form Matting (CFM) and Spectral Matting (SM). They pose the matting process as either a constrained optimization problem or a segmentation-like component selection process (a minimal sketch of the closed-form system appears after this abstract). This difference of formulation stems from an interesting difference of perspective on the matting process, opening the door for more automation possibilities. Consequently, both of these methods have been the subject of automation attempts that produced some intriguing results.

For their importance and potential, this thesis will provide a detailed discussion and analysis of two of the most successful techniques proposed to automate the CFM and SM methods. In the beginning, the focus will be on introducing the theoretical grounds of both matting methods as well as the automatic techniques. Then, it will shift toward a full analysis and assessment of the performance and implementation of these automation attempts. To conclude the thesis, a brief discussion of possible improvements will be presented, within which a hybrid technique is proposed to combine the best features of the two reviewed techniques. / Thesis / Master of Applied Science (MASc)
|
7 |
Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality. Xiong, Yiyan, 01 January 2014.
3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be able to be controlled by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its corresponding texture map. The whole modeling process takes only a few seconds and the resulting human model looks like the real person. The geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system, in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
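Since the contour segmentation hinges on ShortStraw-style corner finding, here is a minimal sketch of that core idea: resample the contour to even spacing, measure the "straw" (chord) length between points a fixed window apart, and mark local minima below a threshold as corner candidates. The window size and threshold factor follow the original ShortStraw description; IStraw's additional checks are not reproduced, and the helper names are illustrative.

```python
import numpy as np

def resample(points, spacing):
    """Resample a polyline to roughly even spacing (prerequisite for ShortStraw)."""
    pts = [points[0]]
    dist = 0.0
    for p, q in zip(points[:-1], points[1:]):
        seg = np.linalg.norm(q - p)
        while dist + seg >= spacing:
            t = (spacing - dist) / seg
            p = p + t * (q - p)          # step to the next evenly spaced point
            pts.append(p.copy())
            seg -= (spacing - dist)
            dist = 0.0
        dist += seg
    return np.array(pts)

def shortstraw_corners(points, window=3, median_factor=0.95):
    """Indices of corner candidates: local minima of the 'straw' length."""
    n = len(points)
    straws = np.array([np.linalg.norm(points[i + window] - points[i - window])
                       for i in range(window, n - window)])
    threshold = np.median(straws) * median_factor
    corners = []
    for j in range(1, len(straws) - 1):
        if straws[j] < threshold and straws[j] <= straws[j - 1] and straws[j] <= straws[j + 1]:
            corners.append(j + window)   # map back to the original point index
    return corners

# Example: corners of an L-shaped polyline.
path = np.array([[0, 0], [10, 0], [10, 10]], dtype=float)
pts = resample(path, spacing=1.0)
print(shortstraw_corners(pts))  # expect an index near the bend at (10, 0)
```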
|
8 |
Prototypování fotografické kompozice pomocí rozšířené reality / Prototyping of Photographic Composition Using Augmented Reality. Salát, Marek, January 2016.
The thesis deals with an image processing problem called image matting. The problem involves detecting the foreground and background in an image, with minimal user interaction, using trimaps. Foreground detection is used in image composition. The goal of the thesis is to apply an already known algorithm, in this case global sampling matting, in an Android application. The most important result is an intuitive application that can be used for making creative viral photos. Agile methodology is applied throughout the whole application development cycle. From the very beginning, the application is publicly available as a minimum viable product on Google Play. The work's contribution lies in optimizing the mentioned algorithm for use on mobile devices and parallelizing it on a GPU, together with a publicly available application.
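The global sampling matting algorithm mentioned above chooses, for every unknown pixel, the foreground/background sample pair that best explains its color. A brute-force sketch of that selection is shown below; the spatial weight of 0.1 is an illustrative assumption, and the real method replaces the exhaustive loop with a randomized search and a GPU implementation, which is not reproduced here.

```python
import numpy as np

def pair_cost(pixel, pixel_xy, fg_color, fg_xy, bg_color, bg_xy, eps=1e-6):
    """Cost of explaining `pixel` with a candidate (F, B) pair: color
    distortion of the compositing fit plus spatial distance to the samples."""
    diff = fg_color - bg_color
    alpha = np.clip(np.dot(pixel - bg_color, diff) / (np.dot(diff, diff) + eps), 0.0, 1.0)
    distortion = np.linalg.norm(pixel - (alpha * fg_color + (1 - alpha) * bg_color))
    spatial = np.linalg.norm(pixel_xy - fg_xy) + np.linalg.norm(pixel_xy - bg_xy)
    return distortion + 0.1 * spatial, alpha   # 0.1 is an illustrative weight

def best_alpha(pixel, pixel_xy, fg_samples, bg_samples):
    """Exhaustively search all (F, B) pairs; fg_samples and bg_samples are
    lists of (color, xy) arrays collected from the known trimap regions."""
    best = (np.inf, 0.5)
    for fg_color, fg_xy in fg_samples:
        for bg_color, bg_xy in bg_samples:
            cost, alpha = pair_cost(pixel, pixel_xy, fg_color, fg_xy, bg_color, bg_xy)
            if cost < best[0]:
                best = (cost, alpha)
    return best[1]
```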
|