About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

融合肖像漫畫之立體誇張肖像模型產生系統 / A 3D Caricature System by Fusing Caricature Images

陳又綸, Chen, Yu-Lun / Unknown Date
With recent advances in hardware, the computation and display of 3D graphics, once possible only on workstations, can now easily be carried out on an ordinary personal computer. At the same time, thanks to the prevalence of instant-messaging software, webcams have become inexpensive and widespread. These factors motivated us to design an economical and effective system that lets general users set up their own capture environment and create 3D caricature models automatically, without complex procedures or professional instruction. In this thesis, we developed a system that extracts and analyzes facial features from an image pair captured by two webcams and then generates an exaggerated 3D face model from them. Specifically, we improved on previous 2D caricature research to obtain substantially more accurate facial-feature locations, and extended the exaggeration algorithm to produce more striking effects. By swapping in different reference artworks when fusing the caricature, the system can paint the face model in various artists' styles, making it suitable for a wide range of entertainment uses.
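The abstract above describes recovering 3D facial-feature positions from an image pair captured by two webcams. A minimal sketch of that stereo step, assuming rectified cameras: the function name `triangulate` and the focal length and baseline values are illustrative assumptions, not parameters taken from the thesis.

```python
def triangulate(x_left, x_right, y, f=800.0, baseline=0.12):
    """Return (X, Y, Z) in metres for a feature seen at column x_left in the
    left image and x_right in the right image (same row y after rectification).
    Image coordinates are measured from the principal point; f is the focal
    length in pixels and baseline is the distance between the two webcams.
    All numeric defaults are hypothetical."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    Z = f * baseline / disparity   # depth from similar triangles
    X = x_left * Z / f             # back-project into the camera frame
    Y = y * Z / f
    return X, Y, Z
```

Applying this to each matched facial feature yields the sparse 3D point set on which a face model can be fitted and exaggerated.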
2

由地面光達資料自動重建建物模型之研究 / Automatic Generation of Building Model from Ground-Based LIDAR Data

詹凱軒, Kai-Hsuan, Chan / Unknown Date
A ground-based LIDAR system can rapidly acquire large volumes of high-precision point cloud data from building surfaces. These data record not only the three-dimensional geometry of the scanned object but also its color. However, the point clouds are so large that efficient data-processing techniques are needed to display the resulting 3D models on screen in real time. We propose a processing scheme for building point clouds acquired by a ground-based LIDAR system, aiming to represent the entire building model with a small set of key points. The scheme consists of three stages. First, using a three-dimensional grid data structure, we extract key points from the building point cloud and construct a rough model with a 3D triangulated irregular network (TIN). Second, we examine each remaining point and decide whether it is essential to the final model, progressively refining the model's details. Finally, we convert the color information in the point cloud into images and drape them onto the model surfaces as texture, making the model more realistic. In our experiments on the twin-tower General Building of National Chengchi University, we successfully eliminated a large amount of redundant data: only about 1% of the original point cloud was needed to reconstruct the building model. To allow the model to be browsed from different viewpoints in real time, we describe the processed 3D model in VRML, so remote users can view it through an ordinary web browser.
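The first stage above reduces the point cloud by keeping only key points selected through a three-dimensional grid. A minimal sketch of that idea, assuming one representative point is kept per occupied voxel; the function name `grid_keypoints` and the cell size are hypothetical, and the thesis's actual selection criterion within each cell may differ.

```python
def grid_keypoints(points, cell=0.5):
    """Downsample a point cloud with a 3D grid: partition space into cubic
    voxels of side `cell` (same units as the points) and keep the first
    point encountered in each occupied voxel as its representative."""
    buckets = {}
    for p in points:
        # Integer voxel index along each axis identifies the cell.
        key = tuple(int(c // cell) for c in p)
        if key not in buckets:
            buckets[key] = p
    return list(buckets.values())
```

Run on a dense scan, this keeps roughly one point per voxel, which is how a building facade sampled by millions of LIDAR returns can be summarized by a small fraction of key points before triangulation.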
