1

The Video Object Segmentation Method for Mpeg-4

Huang, Jen-Chi 23 September 2004 (has links)
In this thesis, we propose a series of methods for moving object segmentation and their applications: moving object segmentation in the wavelet domain, a double change detection method, a global motion estimation method, and moving object segmentation over a moving background. First, we propose a video object segmentation method in the wavelet domain that applies change detection with different thresholds in the four wavelet sub-bands. The experimental results show that we obtain richer object shape information and extract the moving object more accurately. Second, in the double change detection method, we segment the moving object using three successive frames, applying change detection twice in the wavelet domain; after an intersection operation, we obtain an accurate moving object edge map and further object shape information. Third, we propose a novel global motion estimation method that uses cross points to reconstruct the background scene of a video sequence; because cross points are robust and few in number, the affine parameters of the global motion can be estimated efficiently. Finally, we propose an object segmentation method for motion scenes: we estimate the global motion between consecutive frames, reconstruct a wide background scene without moving objects from those frames, and then segment the moving objects easily by comparing each frame against the corresponding part of the wide background. The proposed methods perform well on different types of video sequences, and thus contribute to MPEG-4 video coding and multimedia technology.
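As a rough sketch of the double change detection idea, the following Python fragment intersects the two frame-difference masks around a middle frame so that uncovered-background pixels cancel out. The thesis applies change detection in the wavelet domain with separate thresholds per sub-band; this pixel-domain version with a single illustrative threshold only mirrors the overall scheme, and the function names are ours, not the thesis's.

import numpy as np

def change_mask(frame_a, frame_b, threshold=25):
    # Binary change-detection mask between two grayscale uint8 frames.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold

def double_change_detection(prev_frame, curr_frame, next_frame, threshold=25):
    # Change masks on either side of the middle frame; their intersection
    # (the intersection operation of the abstract) keeps only pixels that
    # changed in both steps, i.e. the moving object around curr_frame.
    mask_backward = change_mask(prev_frame, curr_frame, threshold)
    mask_forward = change_mask(curr_frame, next_frame, threshold)
    return mask_backward & mask_forward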
2

影片動跡剪輯 (Motion-Based Video Editing)

王智潁, Wang, Chih-Ying Unknown Date (has links)
Motion-based editing splices together video clips with different content into a new video according to the movement of specific objects in the footage, so that the resulting video preserves the continuity and smoothness of the motion. This thesis proposes a method that automatically finds similar editing points between different videos to serve as references for motion-based editing. The key of the method is to build spatio-temporal information for each video as the basis for locating editing points. To build it, we first divide a video into clips at detected shot-change points, then separate the position, size, and motion of the foreground objects in each clip into video object planes, and finally combine the clip's background motion information with its video object plane information to form the clip's spatio-temporal information, from which candidate editing points are searched and compared and the best one is chosen for the cut. When the spatio-temporal information is used to find editing points between videos, the video object plane serves as the unit of search, which improves the correctness of the results and also provides flexibility during the search. / With the rapid growth of multimedia applications in the modern commercial and movie business, efficient video editing tools become ever more desirable. However, conventional video editing requires so many manual interventions that it reduces productivity as well as opportunities for better performance. In this thesis, we propose a MOtion-based Video Editing (MOVE) mechanism that can automatically select the most similar or suitable transition points from a given set of raw videos. A given video can be divided into a set of video clips using a shot detection algorithm. For each video clip, we provide an algorithm that can separate the global motions from the local motions using the principles of the video object plane and the accumulated difference. We introduce the concept of spatio-temporal information, condensed information associated with a video clip, and use this information to find good video editing points. Since the spatio-temporal information is a concise representation of a video clip, searching in this domain reduces the complexity of the problem and achieves better performance. We implemented our mechanism and validated it with successful experiments.
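To make the clip-splitting step concrete, here is a minimal sketch of histogram-based shot boundary detection, one plausible reading of the shot detection algorithm the abstract relies on; the bin count and threshold are illustrative assumptions, not values from the thesis.

import numpy as np

def gray_histogram(frame, bins=64):
    # Normalized grayscale histogram, so the distance below is
    # independent of frame size.
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def detect_shot_boundaries(frames, threshold=0.4):
    # Mark frame indices where the L1 distance between consecutive
    # normalized histograms jumps, signalling a likely cut.
    boundaries = []
    prev_hist = gray_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        hist = gray_histogram(frame)
        if np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(i)
        prev_hist = hist
    return boundaries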
