51

以影像為基礎之智慧型睡眠監測系統 / Intelligent video-based sleep monitoring system

郭仁和, Kuo, Jen Ho Unknown Date (has links)
我們提出的智慧型睡眠監測系統,是基於影像分析技術進行睡眠品質觀測,並利用所得到的數據來推斷最佳的喚醒時間。此系統命名為iWakeUp,利用非接觸式的方法來收集影像資料並進行後續處理,此裝置將被安裝在一般的臥室來幫助睡眠者,以期成為增進智慧家庭生活品質的一環。在此論文中,我們將會描述iWakeUp的各個模組包括測定動作量、推斷睡眠階段乃至於如何建立喚醒機制。更特別的是,我們考慮了喚醒時間與喚醒機制的關係,於較早的時間喚醒必須具有更高的信心度,否則將付出較大的代價,反之亦然。另外為了處理晨間臥室中的光影變化,不同的背景模型也已被整合測試,以期讓系統可以提升長時間觀測的準確度。最後,我們也進行了使用iWakeUp的臨床實驗,結果指出使用iWakeUp喚醒的睡眠者具有較低的嗜睡感與更好的活力。 / In this thesis, we present a video-based monitoring system that determines sleep status and the optimal wakeup time. We envision a smart living space in which a data collection and processing module named iWakeUp is installed in the bedroom to record and monitor sleep in a non-invasive manner. We describe the overall structure of the iWakeUp system, including the procedure for measuring the amount of motion, the method for inferring wake/sleep status from the acquired video, and the logic for deciding the optimal wakeup time. In particular, a time-dependent decision rule is incorporated to account for unequal penalties when classification errors occur: waking the sleeper earlier requires higher confidence, and vice versa. Furthermore, various background modeling techniques are examined to address lighting changes at dawn in the bedroom during long-term monitoring. Validation experiments were carried out to compare alertness levels upon awakening with and without iWakeUp; participants awakened by iWakeUp reported a lower level of sleepiness and a higher level of vigor than the control group.
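The motion measurement and the time-dependent wakeup rule lend themselves to a compact illustration. The sketch below is ours, not the thesis implementation: `motion_amount` uses plain frame differencing (the thesis also evaluates richer background models to cope with dawn lighting), and the linear confidence schedule in `should_wake`, together with parameters such as `base_conf` and `slope`, is a hypothetical instance of the stated rule that earlier wakeups demand higher confidence.

```python
import numpy as np

def motion_amount(prev_frame, frame, diff_thresh=15):
    """Per-frame motion: fraction of pixels whose gray-level change exceeds
    a threshold. Simple frame differencing; a background model would be
    more robust to gradual lighting changes at dawn."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > diff_thresh).mean())

def should_wake(p_awake, minutes_to_deadline, base_conf=0.6, slope=0.01):
    """Time-dependent decision rule: the earlier it is, the more confident
    we must be that the sleeper is in a light/wake stage before triggering
    the alarm; at the deadline the alarm fires regardless."""
    required = base_conf + slope * minutes_to_deadline
    return minutes_to_deadline <= 0 or p_awake >= min(required, 0.99)
```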
52

突圍 – 軟體代理商的競爭模式 / Software distributor competition strategy

洪志輝 Unknown Date (has links)
電腦軟體這個伴隨著電腦科技,成為今後人類最具影響力的產品,隨著網際網路的發展已經進入了一個新的世代。在網際網路普及之前,軟體的發展除了要靠軟體開發廠商的創造力,最重要的就是代理商的傳播力、行銷力、及銷售力,把這樣無形性、智慧性的商品,提供給所需要的顧客。網際網路普及後則發生了本質性的轉變,尤其是數位影像及多媒體這類比較偏向一般使用者的套裝軟體。 本文將以數位影像及多媒體的代理商,在台灣的通路發展為主軸,探討這類通路商所面臨的問題及策略決策模式。其中我們將以該產業最著名代理商為個案研究標的,以該個案為主軸探討此類通路商如何應用五力分析在市場處於發展階段,用以發展自身的競爭優勢及其策略,在市場面臨成熟時如何應用技術採用生命週期的觀點,面對網際網路普及及其對通路的衝擊,並提出可行的對應策略及建議。 / Computer software, riding on the development of computer technology and propelled by the spread of the Internet and hardware, has become one of the most influential products for humanity and has entered a new era. Before the Internet age, the diffusion of software depended on the original developer's creativity and, just as importantly, on the local distributor's marketing and sales force to deliver this intangible, knowledge-intensive product to customers. With the arrival of the Internet age, some of these relationships changed fundamentally; image and multimedia packages aimed at general consumers were among the most dramatically affected. This thesis focuses on image and multimedia software distributors and their channel development in Taiwan, examining the problems such distributors face and their strategic decision patterns. Through a case study of the most prominent distributor in the industry, we explore how it applied Five Forces analysis to build its competitive advantage and strategy during the market's growth stage, how it drew on the Technology Adoption Life Cycle perspective as the market matured, and how it responded to the impact of Internet adoption on the channel, concluding with feasible strategies and recommendations.
53

壽險業新契約作業的演進與未來發展 / The evolution & future development in processing the new business of life insurance

吳雲嬌 Unknown Date (has links)
隨著資訊科技的日新月異與網際網路的普及應用,各企業在面臨全球化的競爭環境下,皆積極尋求應用先進的資訊技術,力求創新與躍進,以期用最有限的資源創造企業最大的效益。對以客戶服務為主的壽險業而言,隨著新契約快速成長、產品多樣化、行銷管道多元化,壽險業所面臨的挑戰與競爭亦隨之加劇,因此,保險公司除了業務推廣外,更須加快求新求變的腳步,提供更創新、更超乎預期的客戶服務,還要兼顧營運成本管控,才能在瞬息萬變的金融保險市場,持續保有競爭優勢。 本研究主要探討個案公司的新契約作業,在面臨業務量急速成長又要兼顧成本/利潤而無法同步大幅增加人力下,如何突破作業瓶頸,以及面臨每一階段的困難與挑戰時,如何事先評估風險與因應措施,並運用科技技術及專業團隊來大幅提升服務品質與作業效率。並且探討個案公司在完成每個階段的變革後,如何分析專案的執行成效,又如何持續提出還可再提昇改善的事項、問題以及未來可能的發展方向。個案公司的新契約作業演進過程摘要如下: 1.由於早期的人工作業已難以負荷大幅成長的業務量,故個案公司於1992年建置了「自動化核保系統」(Underwriting Automation)。然在專案推動過程中,面臨了〝如何建置完善的自動化系統〞、〝如何在兼顧成本與流程順暢下,決定最佳的系統建置方式〞及〝如何讓人員接受作業改造〞的問題,透過專案小組以使用者的角度規劃系統流程,並且不斷地與相關人員進行充份溝通與宣導後,終於獲得所有人員的認同與支持。透過自動核保系統,不僅解決因業務量急速成長所面臨的作業瓶頸,也大幅提昇核保效率、降低人為核保錯誤率並且節省人力成本。 2.Underwriting Automation系統雖已提昇核保/發單效率,但仍面臨因產品的多元化以致業務員反應記不了這麼多的投保規則、以及無法在與客戶洽談保險當時即提供保戶確定的核保結果,因此,個案公司於2001年建置了「線上快速投保系統」(e-Application)。然在專案推動過程中,須克服的問題是〝如何提昇業務員使用系統的意願〞,透過專案小組不斷地與業務員進行溝通、並不斷地修正系統與持續地教育訓練及推廣,終於將e-Application系統使用率提昇到98%。透過e-Application系統,業務員不再有投保規則複雜的困擾,且不論上班/下班時間或例假日,都可隨時經由網際網路(internet)在客戶所在之處完成e受理、e核保,提供保戶即時的保障承諾,大幅提昇業務員的保險專業形象及行銷便利性。 3.e-Application系統雖可立即獲得核保結果,惟業務員仍須將要保文件寄達分公司才能處理後續作業。再加上投資型商品熱賣且作業較傳統型商品複雜,致核保人員的作業負荷增加。為了大幅提昇作業效率且運用有限的人力資源發揮最大的效益,個案公司於2006年建置了「影像線上作業系統」(Image & Workflow)。然在專案推動過程中,面臨了〝如何改變核保人員的作業習慣〞、〝如何將分公司人力順利移轉至簡易作業中心〞等問題,專案小組透過不斷地溝通及訓練,協助核保人員適應全程線上作業的變革;並提早一年與分公司溝通及規劃人力移轉事宜,讓人力及作業能夠順利移轉。透過Image & Workflow系統,其快速便捷的e化流程,5秒鐘即可傳遞要保文件影像,不僅有效改善新契約受理高峰量之人力及作業負荷問題,且簡易案件的分流已大幅提昇核保效率且降低行政作業成本,並使核保員可更專注於複雜案件的處理與溝通,提供保戶及業務員更優質的核保服務。 保險是永續經營的服務事業,因此,流程變革是保險公司必須與時俱進且持續研討的重要課題。本研究藉由個案公司流程變革的過程、經驗及成效的分享,建議小型壽險公司推動核保自動化/影像化、中大型壽險公司全面e化/影像化/無紙化,並建議個案公司在邁入e化、影像化、無紙化的流程後,針對仍須仰賴人工處理的輸入作業,以及體檢核保人員養成不易的問題,可再進一步研議如何運用更精進的文字辨識技術與醫務專家系統,同時結合相關產業資源,採分階段方式逐步建置更科技化的系統平台,讓新契約作業邁向更快速、更專業的服務新里程。 / With the rapid advance of information technology and the widespread adoption of the Internet, companies facing global competition are aggressively applying modern technology to innovate, aiming to create the greatest benefit from limited resources. For insurance companies, whose business centers on customer service, the rapid growth of new business, the variety of products, and the diversity of sales channels have intensified challenges and competition. To maintain a competitive advantage in the fast-changing financial-insurance market, insurers must therefore accelerate the pace of change (re-engineering), providing innovative service beyond customer expectations while keeping operating costs under control, in addition to promoting new business. This thesis is a case study of an insurance company's new business process. It shows how the company broke through processing bottlenecks when business volume grew rapidly but headcount could not be increased correspondingly for cost/benefit reasons; how, at each phase, it assessed risks in advance and prepared countermeasures; and how it applied technology and professional teamwork to raise service quality and operational efficiency. We also discuss how the company evaluated the outcomes of each phase of re-engineering, including reviewing project achievements and benefits, continuously identifying items for further improvement, and setting future development directions.
The evolution of the company's new business process is summarized as follows. 1. Because the early manual process could barely handle the rapidly growing business volume, the company implemented the Underwriting Automation system in 1992. During the project it faced problems such as how to build a comprehensive automation system, how to choose the implementation approach that best balanced cost and workflow smoothness, and how to get staff to accept the process re-engineering. By planning the system workflow from the user's point of view and communicating fully with stakeholders, the project team eventually won everyone's recognition and support. The Underwriting Automation system not only removed the bottleneck caused by the surge in business volume, but also greatly improved underwriting efficiency, reduced manual underwriting errors, and saved manpower costs. 2. Although the Underwriting Automation system improved underwriting and policy-issuing efficiency, agents complained that they could not remember the ever-growing set of underwriting rules and could not give customers a confirmed underwriting result on the spot. The company therefore implemented the e-Application system in 2001. The key problem to overcome was how to raise agents' willingness to use the system; through continuous communication with agents, repeated system refinements, and ongoing training and promotion, the usage rate eventually reached 98%. With e-Application, agents are no longer troubled by complex underwriting rules and can complete e-submission and e-underwriting over the Internet at the customer's location at any time, promising protection to policyholders immediately and greatly enhancing the agents' professional image and sales convenience. 3. Although e-Application returns underwriting results immediately, agents still had to mail the application documents to a branch office for subsequent processing. In addition, investment-linked products (ILP) sold well and required more complicated processing than traditional products, increasing the underwriters' workload. To raise operational efficiency and make the best use of limited human resources, the company implemented the Image & Workflow system in 2006. The project faced problems such as how to change underwriters' working habits and how to smoothly transfer branch manpower to a simple-case processing center; the project team kept training and communicating with underwriters to help them adapt to the fully online process, and planned the manpower transfer with the branches a year in advance so that both work and staff moved smoothly.
With the Image & Workflow system, a fast and convenient automated flow delivers document images within five seconds. This not only relieved the manpower and workload pressure at new business peaks, but also routed simple cases into a separate stream, greatly improving underwriting efficiency and lowering administrative costs, so that underwriters could concentrate on handling and communicating complicated cases and provide better underwriting service to agents and policyholders. Since insurance is a business of sustainable service, process re-engineering is an important task that insurers must keep examining and updating with the times. Sharing the case company's re-engineering experience and achievements, this thesis suggests that small insurers implement underwriting automation and document imaging, and that medium and large insurers fully adopt electronic, image-based, and paperless processes. It further suggests that, after such re-engineering, the remaining manual data-entry work and the difficulty of training medical underwriters can be addressed by studying more advanced optical character recognition (OCR) technology and medical expert systems, drawing on resources from related industries to build a more technologically advanced platform in phases, and moving the new business process toward a faster and more professional level of service.
54

粒子群最佳化演算法於估測基礎矩陣之應用 / Particle swarm optimization algorithms for fundamental matrix estimation

劉恭良, Liu, Kung Liang Unknown Date (has links)
基礎矩陣在影像處理是非常重要的參數,舉凡不同影像間對應點之計算、座標系統轉換、乃至重建物體三維模型等問題,都有賴於基礎矩陣之精確與否。本論文中,我們提出一個機制,透過粒子群最佳化的觀念來求取基礎矩陣,我們的方法不但能提高基礎矩陣的精確度,同時能降低計算成本。 我們從多視角影像出發,以SIFT取得大量對應點資料後,從中選取8點進行粒子群最佳化。取樣時,我們透過分群與隨機挑選以避免選取共平面之點。然後利用最小平方中值法來估算初始評估值,並遵循粒子群最佳化演算法,以最小疊代次數為收斂準則,計算出最佳之基礎矩陣。 實作中我們以不同的物體模型為標的,以粒子群最佳化與最小平方中值法兩者結果比較。實驗結果顯示,疊代次數相同的實驗,粒子群最佳化演算法估測基礎矩陣所需的時間,約為最小平方中值法來估測所需時間的八分之一,同時粒子群最佳化演算法估測出來的基礎矩陣之平均誤差值也優於最小平方中值法所估測出來的結果。 / The fundamental matrix is a very important parameter in image processing: corresponding point determination, coordinate system conversion, and three-dimensional model reconstruction all depend on its accuracy. Hence, obtaining an accurate fundamental matrix is one of the most important issues in image processing. In this thesis, we present a mechanism that uses the concept of Particle Swarm Optimization (PSO) to find the fundamental matrix. Our approach not only improves the accuracy of the fundamental matrix but also reduces computation costs. After using the Scale-Invariant Feature Transform (SIFT) to obtain a large number of corresponding points from the multi-view images, we choose a set of eight corresponding points, based on the image resolutions, grouping principles, and random sampling (to avoid coplanar points), as the starting point for PSO. Least Median of Squares (LMedS) is used to estimate the initial fitness value, and a minimal number of iterations serves as the convergence criterion; the fundamental matrix is then computed with the PSO algorithm. We use different object models to evaluate our mechanism and compare the results obtained by PSO and by LMedS. The experimental results show that, for the same number of iterations, the fundamental matrix computed by the PSO method has a smaller average error than that computed by the LMedS method, while the PSO method takes about one-eighth of the time required by the LMedS method.
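To make the PSO formulation concrete, here is a minimal sketch (ours, not the thesis code): each particle directly encodes the nine entries of F, rank-2 is enforced by SVD, and the LMedS-style fitness is the median Sampson error over all SIFT correspondences. The thesis instead evolves 8-point subsets chosen by clustering and random sampling; the swarm parameters (`w`, `c1`, `c2`, swarm size) are conventional PSO defaults, not values from the thesis, and input points are assumed to be Hartley-normalized homogeneous coordinates for numerical conditioning.

```python
import numpy as np

def sampson_errors(F, x1, x2):
    """Squared Sampson distances; x1, x2 are Nx3 homogeneous point arrays."""
    Fx1 = x1 @ F.T                      # epipolar lines in image 2
    Ftx2 = x2 @ F                       # epipolar lines in image 1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def to_rank2(f):
    """Reshape a 9-vector into a rank-2, unit-norm fundamental matrix."""
    U, S, Vt = np.linalg.svd(f.reshape(3, 3))
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return F / np.linalg.norm(F)

def pso_fundamental(x1, x2, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    P = rng.normal(size=(n_particles, 9))     # particle positions (entries of F)
    V = np.zeros_like(P)                      # particle velocities
    fit = lambda p: np.median(sampson_errors(to_rank2(p), x1, x2))  # LMedS-style
    pbest, pbest_f = P.copy(), np.array([fit(p) for p in P])
    g = pbest[np.argmin(pbest_f)].copy()      # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        V = w * V + c1 * r1 * (pbest - P) + c2 * r2 * (g - P)
        P = P + V
        f_now = np.array([fit(p) for p in P])
        better = f_now < pbest_f
        pbest[better], pbest_f[better] = P[better], f_now[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return to_rank2(g)
```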
55

基於多視角幾何萃取精確影像對應之研究 / Accurate image matching based on multiple view geometry

謝明龍, Hsieh, Ming Lung Unknown Date (has links)
近年來諸多學者專家致力於從多視角影像獲取精確的點雲資訊,並藉由點雲資訊進行三維模型重建等研究,然而透過多視角影像求取三維資訊的精確度仍然有待提升,其中萃取影像對應與重建三維資訊方法,是多視角影像重建三維資訊的關鍵核心,決定點雲資訊的形成方式與成效。 本論文中,我們提出了一套新的方法,由多視角影像之間的幾何關係出發,萃取多視角影像對應與重建三維點,可以有效地改善對應點與三維點的精確度。首先,在萃取多視角影像對應的部份,我們以相互支持轉換、動態高斯濾波法與綜合性相似度評估函數,改善補綴面為基礎的比對方法,提高相似度測量值的辨識力與可信度,可從多視角影像中獲得精確的對應點。其次,在重建三維點的部份,我們使用K均值分群演算法與線性內插法發掘潛在的三維點,讓求出的三維點更貼近三維空間真實物體表面,能在多視角影像中獲得更精確的三維點。 實驗結果顯示,採用本研究所提出的方法進行改善後,在對應點精確度的提升上有很好的成效,所獲得的點雲資訊存在數萬個精確的三維點,而且僅有少數的離群點。 / Recently, many researchers have paid attention to obtaining accurate point cloud data from multi-view images and using these data in 3D model reconstruction. However, the accuracy of the 3D information derived from multi-view images still needs improvement. Among the methods involved, those for extracting image correspondences and reconstructing 3D information are the most critical: they determine how the point cloud is formed and how good it is. In this thesis, we propose new approaches, based on multi-view geometry, to improve the accuracy of corresponding points and 3D points. For correspondence extraction, mutual support transformation, dynamic Gaussian filtering, and a comprehensive similarity evaluation function are used to improve patch-based matching; these mechanisms raise the discrimination ability and reliability of the similarity measure and, hence, the accuracy of the extracted corresponding points. For 3D point reconstruction, we use the K-means clustering algorithm and linear interpolation to discover latent 3D points, so that the computed points lie much closer to the surface of the actual object, yielding more accurate 3D points from the multi-view images. Experimental results show that our approach clearly improves the accuracy of the corresponding points, and the resulting point cloud contains tens of thousands of accurate 3D points with only a few outliers.
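The refinements named above (mutual support transformation, dynamic Gaussian filtering, the comprehensive similarity function) build on a standard patch-based similarity. As a reference point only, here is a minimal sketch of the baseline normalized cross-correlation (NCC) matcher, with our own function names; how the thesis weights and aggregates these raw scores is not reproduced here.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized gray patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def best_match(img1, img2, p, candidates, half=5):
    """Among candidate pixels in img2 (e.g., sampled along the epipolar
    line of p), return the one whose surrounding patch is most similar
    to the patch around p in img1."""
    r, c = p
    ref = img1[r - half:r + half + 1, c - half:c + half + 1]
    scores = []
    for (r2, c2) in candidates:
        patch = img2[r2 - half:r2 + half + 1, c2 - half:c2 + half + 1]
        if patch.shape != ref.shape:      # candidate too close to the border
            scores.append(-1.0)
            continue
        scores.append(ncc(ref, patch))
    k = int(np.argmax(scores))
    return candidates[k], scores[k]
```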
56

三焦張量在多視角幾何中的計算與應用 / Computation and Applications of Trifocal Tensor in Multiple View Geometry

李紹暐, Li, Shau Wei Unknown Date (has links)
電腦視覺三維建模的精確度,仰賴影像中對應點的準確性。以前的研究大多採取兩張影像,透過極線轉換(epipolar transfer)取得影像間基礎矩陣(fundamental matrix)的關係,然後進行比對或過濾不良的對應點以求取精確的對應點。然極線轉換存在退化的問題,如何避免此退化問題以及降低兩張影像之間轉換錯誤的累積,成為求取精確三維建模中亟待解決的課題。 本論文中,我們提出一套機制,透過三焦張量(trifocal tensor)的觀念來過濾影像間不良的對應點,提高整體對應點的準確度,從而能計算較精確的投影矩陣進行三維建模。我們由多視角影像出發,先透過Bundler求取對應點,然後採用三焦張量過濾Bundler產生的對應點,並輔以最小中值平方法(LMedS)提升選點之準確率,再透過權重以及重複過濾等機制來調節並過濾對應點,從而取得精確度較高的對應點組合,最後求取投影矩陣進行電腦視覺中的各項應用。 實作中,我們測試了三組資料,包含一組以3ds Max自行建置的資料與兩組網路中取得的資料。我們先從三張影像驗證三焦張量的幾何特性與其過濾對應點的可行性,再將此方法延伸至多張影像,同樣也能證實透過三焦張量確實能提升對應點的準確度,甚至可以過濾出輸入資料中較不符合彼此間幾何性的影像。 / The accuracy of 3D model construction in computer vision depends on the accuracy of the corresponding points extracted from the images. Previous studies mostly use two images, obtain the fundamental matrix relating them through epipolar transfer, and then match the correspondences and filter out bad ones to obtain accurate corresponding points. However, epipolar transfer suffers from degeneracy, and conversion errors accumulate between image pairs; avoiding the degeneracy and reducing the accumulated errors are crucial problems in reconstructing accurate 3D models from multiple images. In this thesis, we propose a mechanism that uses the concept of the trifocal tensor to filter out bad correspondences between images and raise the overall accuracy of the corresponding points, so that more accurate projection matrices can be computed for 3D modeling. Starting from multi-view images, we first use Bundler to obtain corresponding points, then apply the trifocal tensor to filter the correspondences Bundler produces, assisted by the Least Median of Squares (LMedS) method to improve the accuracy of point selection, and further adjust and filter the correspondences through weighting and repeated filtering. With the resulting high-precision correspondences, we compute the projection matrices for various computer vision applications. We tested three data sets: one built by ourselves with 3ds Max and two downloaded from the Internet. We first verified, on three images, the geometric properties of the trifocal tensor and its feasibility for filtering correspondences, then extended the method to more images, confirming that the trifocal tensor indeed raises the accuracy of the corresponding points and can even screen out input images that are geometrically inconsistent with the others.
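As a concrete illustration of how a trifocal tensor flags bad correspondences, here is a minimal point-transfer sketch. It is our own, assuming the tensor T has already been estimated from reliable triplets; the thesis's LMedS selection, weighting, and repeated filtering are omitted, and the function names are hypothetical. The transfer follows the standard point-line-point relation x3^k = x1^i l2_j T_i^{jk}, where l2 is a line through the matched point in the second view.

```python
import numpy as np

def transfer_point(T, x1, x2):
    """Transfer x1 (view 1) and x2 (view 2) to a predicted point in view 3.

    T: trifocal tensor of shape (3, 3, 3) with T[i][j, k] = T_i^{jk}.
    x1, x2: homogeneous 2D points, shape (3,).
    l2 must be a line through x2 other than the epipolar line of x1; as a
    simple heuristic we try a vertical and a horizontal line through x2 and
    keep the transfer with the larger norm (the better-conditioned one).
    Points at infinity (x3[2] near zero) are not handled in this sketch."""
    lines = [np.array([1.0, 0.0, -x2[0] / x2[2]]),   # vertical line through x2
             np.array([0.0, 1.0, -x2[1] / x2[2]])]   # horizontal line through x2
    best = None
    for l2 in lines:
        x3 = sum(x1[i] * (T[i].T @ l2) for i in range(3))
        if best is None or np.linalg.norm(x3) > np.linalg.norm(best):
            best = x3
    return best / best[2]

def transfer_error(T, x1, x2, x3):
    """Pixel distance between the observed x3 and the transferred point."""
    x3_pred = transfer_point(T, x1, x2)
    return float(np.linalg.norm(x3_pred[:2] - x3[:2] / x3[2]))

def filter_triplets(T, pts1, pts2, pts3, thresh=2.0):
    """Keep triplet indices whose transfer error is below a threshold
    (the thesis selects such thresholds with LMedS)."""
    return [i for i in range(len(pts1))
            if transfer_error(T, pts1[i], pts2[i], pts3[i]) < thresh]
```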
57

底片交換遊戲所展示的攝影趣味— 以交換重曝新影像敘事為例 / The playfulness of photography in filmswap: exploring a new image narrative

錢怡安, Chien, Yi An Unknown Date (has links)
交換重曝是一種兩人交換已拍攝底片的活動。透過交換,底片經由重複曝光使同一張底片出現兩個人所攝之重疊影像。交換重曝這種新近的網路及人際互動現象,說明了傳統的攝影方式與底片使用在攝影數位化的浪潮中並未消逝。 近來底片攝影與玩具相機的興起,使研究者觀察到現今的交換重曝攝影蘊含著情感與趣味的分享。從文獻探討過程發現,參與交換重曝的動機與目的來自於遊戲的樂趣,且交換重曝是一種特殊的人際互動過程,交換過程中因缺乏交換雙方的溝通與交流,竟產生人際傳播間類同「噪音」現象所致的「影像醬」。 本研究經由問卷調查與深度訪談,從參與交換重曝活動者所發佈分享影像的情形,了解交換重曝一般行為。其次,分析沖洗完成的底片所帶來的出乎意料的影像驚喜元素,以探察交換重曝活動在攝影數位化時代中新的影像敘事方法。 本研究結果歸納如下:交換重曝結合了遊戲與生活,開創新的人際交流與互動。交換重曝的影像由出乎意料的內涵與驚喜元素,描繪影像的故事場景,帶來的窺視感內含多種可能,建立陌生文化的交集並型塑影像自我敘事。 / "Filmswap" is an activity in which two people shoot a roll of film, exchange rolls, and shoot over them again. Through the exchange, double exposure overlays the two photographers' images on the same frames. The rise of filmswap as a new form of online and interpersonal interaction shows that traditional photographic practice and film use have not vanished in the wave of digital photography. The recent revival of analog photography and toy cameras led the researcher to observe that today's filmswap photography involves the sharing of emotion and fun. The literature review reveals that the motivation for and purpose of filmswap come from the playfulness of games, and that filmswap is a particular kind of interpersonal interaction: because the two parties do not communicate during the exchange, the process produces a "photo jam", analogous to "noise" in interpersonal communication. Using questionnaire surveys and in-depth interviews, this study examines how participants post and share their images in order to understand general filmswap behavior, and analyzes the unexpected, surprising image elements that the developed film brings, so as to explore a new method of image narrative in the digital era. The conclusions are as follows: filmswap combines play with daily life and creates new forms of interpersonal exchange and interaction; the unexpected content and surprise elements of filmswap images depict story scenes and carry a sense of voyeurism open to many readings; and filmswap builds intersections between unfamiliar cultures and shapes self-narratives through images.
58

見鬼了! 電視新聞為何鬼話連篇?—泛靈化電視新聞初探研究 / Why are ghosts on TV? A preliminary study on paranormal TV news in Taiwan

張之穎, Chang, Chih Yin Unknown Date (has links)
本研究以台灣泛靈化電視新聞為題,探究超自然、非視覺的泛靈化新聞,如何藉由重視影像、感官、並呈現自然界事件的電視新聞媒介再現。 本研究結合人類宗教學之泛靈化理論,並以此為主軸,將人類的宗教儀式行動,應用於媒體行為之中。更進一步,觀察各類超自然新聞,包括算命、靈異等新聞題材,如何在電視媒體中,體現巫術式─情緒化、戲劇性的操控。 研究方法採用內容分析法,統計分析泛靈化新聞之呈現框架,因此對台灣泛靈新聞,做了一次初探性統整。並藉由文本分析,進而對泛靈化電視新聞,做深入的文化解剖。 研究發現:(一)泛靈化新聞透過幻想儀式建構一種慾望與迷思,以天意的塑造、關聯化幻想、宣洩情緒的方式,滿足偷窺的慾望。(二)泛靈化新聞的超真幻想,打破新聞本質。(三)縱欲式幻想,滿足低層次的需求。 / This study takes animistic television news in Taiwan as its subject, exploring how supernatural, non-visual animistic news is represented by a television news medium that privileges images, sensory appeal, and events of the natural world. Drawing on the theory of animism from the anthropology of religion, the study applies human religious-ritual action to media behavior, and further observes how various kinds of supernatural news, including fortune-telling and ghost stories, enact magic-like emotional and dramatic manipulation on television. Content analysis is used to statistically examine the presentation frames of animistic news, providing a first exploratory survey of such news in Taiwan, and textual analysis then offers a deeper cultural anatomy of animistic television news. The findings are: (1) animistic news constructs desire and myth through rituals of fantasy, satisfying voyeuristic desire by framing events as providence, fabricating associations, and venting emotion; (2) the hyper-real fantasy of animistic news breaks the essential nature of news; (3) indulgent fantasy satisfies lower-level needs.
59

多源遙測影像於海岸變遷之研究 / Coastal changes detection using multi-source remote sensing images

梁平, Liang, Ping Unknown Date (has links)
本研究以不同時期之航遙測影像偵測宜蘭海岸濱線變遷,影像來源包含1947年之舊航照影像、1971年的美國Corona衛星影像、1985年的像片基本圖、2003年的SPOT-5衛星影像及2009年以Z/I DMC(Digital Mapping Camera)航空數位相機所拍攝之高解像力航照影像。 由於影像獲取的時間與感測器皆有所差異,故本研究透過不同的方式處理資料,將影像地理對位,並利用地理資訊系統(Geographic Information Systems, GIS)軟體數化濱線及沙灘(丘),且以套疊分析觀察不同時期濱線與沙灘變遷之情形,最後收集宜蘭地區的自然或人文資料如潮汐、降雨量與輸沙量等,分析宜蘭海岸變遷的原因。而在濱線萃取方面,由於以人工數化方式太耗時間與人力,故嘗試以半自動化方式如影像分類或影像分割萃取濱線,並與人工數化結果比較。研究結果顯示,利用多時期之遙測影像,並結合GIS之空間分析功能,確可有效掌握濱線與沙灘(丘)的歷史變化概況。 / This study used multi-temporal remote sensing images to detect shoreline changes along the Yilan coast. Various types of images were used, including old aerial images taken in 1947, Corona satellite images acquired in 1971, a photo base map produced in 1985, SPOT-5 satellite images obtained in 2003, and high-resolution aerial images taken in 2009 with the Z/I DMC (Digital Mapping Camera) digital aerial camera. Because these images were taken at different times using different sensors, different procedures were applied to process the data and georeference the images to a common coordinate system. GIS (Geographic Information Systems) software was used to digitize the shoreline and the beach area, and overlay analysis was applied to find the shoreline changes in different time periods. Various ancillary data such as tides, precipitation, and sediment load were then collected to analyze the causes of coastal changes in Yilan. For shoreline extraction, manual digitization requires a lot of time and manpower, so semi-automatic methods such as image classification and image segmentation were applied to extract the shoreline and compared with the manually digitized results. The results show that, by using multi-temporal remote sensing images and the spatial analysis functionalities of GIS, the historical changes of the shoreline and beach area can be detected effectively.
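The semi-automatic extraction step can be illustrated with a simple threshold-based segmentation. The sketch below is one plausible instance (ours, not the thesis procedure), assuming a near-infrared band is available, since water appears dark in NIR, and letting Otsu's method choose the land/water threshold; the thesis compares classification and segmentation more broadly, and older panchromatic photos would need a different band or texture cue.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import find_contours
from skimage.morphology import remove_small_objects

def extract_shoreline(nir_band):
    """Semi-automatic shoreline extraction by thresholding a NIR band.

    Water absorbs NIR strongly, so an Otsu threshold separates water from
    land; the water/land boundary contour approximates the shoreline."""
    t = threshold_otsu(nir_band)
    water = nir_band < t                              # water pixels are dark in NIR
    water = remove_small_objects(water, min_size=64)  # drop speckle
    contours = find_contours(water.astype(float), 0.5)
    if not contours:
        return None
    return max(contours, key=len)                     # longest boundary ~ shoreline
```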
60

利用近紅外光影像之近景攝影測量建立數值表面模型之研究 / Construction of digital surface model using Near-IR close range photogrammetry

廖振廷, Liao, Chen Ting Unknown Date (has links)
點雲(point cloud)為以大量三維坐標描述地表實際情形的資料形式,其中包含其三維坐標及相關屬性。通常點雲資料取得方式為光達測量,其以單一波段雷射光束掃描獲取資料,以光達獲取點雲,常面臨掃描時間差、缺乏多波段資訊、可靠邊緣線及角點資訊、大量離散點雲又缺乏語意資訊(semantic information)難以直接判讀及缺乏多餘觀測量等問題。 攝影測量藉由感測反射自太陽光或地物本身放射之能量,可記錄為二維多光譜影像,透過地物在不同光譜範圍表現之特性,可輔助分類,改善分類成果。若匹配多張高重疊率的多波段影像,可以獲取包含多波段資訊且位於明顯特徵點上的點雲,提供光達以外的點雲資料來源。 傳統空中三角測量平差解算地物點坐標及產製數值表面模型(Digital Surface Model, DSM)時,多採用可見光影像為主;而目前常見之高空間解析度數值航照影像,除了記錄可見光波段之外,亦可蒐集近紅外光波段影像。但較少採用近紅外光波段影像,以求解地物點坐標及建立DSM。 因此本研究利用多波段影像所蘊含的豐富光譜資訊,以取像方式簡易及低限制條件的近景攝影測量方式,匹配多張可見光、近紅外光及紅外彩色影像,分別建立可見光、近紅外光及紅外彩色之DSM,其目的在於探討加入近紅外光波段後,所產生的近紅外光及紅外彩色DSM,和可見光DSM之異同;並比較該DSM是否更能突顯植被區。 研究顯示,以可見光點雲為檢核資料,計算近紅外光與紅外彩色點雲的均方根誤差為其距離門檻值之相對檢核方法,可獲得約21%的點雲增加率;然而使用近紅外光或紅外彩色影像,即使能增加點雲資料量,但對於增加可見光影像未能匹配的資料方面,其效果仍屬有限。 / A point cloud represents the surface as a mass of 3D coordinates together with related attributes. Such data are usually collected by LiDAR (Light Detection and Ranging), which acquires data by scanning with a single-band laser. Point clouds collected by LiDAR, however, face problems such as non-instantaneous scanning and a lack of multispectral information, breaklines, corners, semantic information, and redundant observations. Photogrammetry, by contrast, records the electromagnetic energy reflected or emitted from the surface as 2D multispectral images; because ground features differ in their spectral characteristics, the imagery can be classified more efficiently and precisely. By matching multiple highly overlapping multispectral images, a point cloud that carries multispectral information and lies on distinct feature points can be acquired, providing a source of point cloud data apart from LiDAR. Most studies use visible-light (VIS) images when calculating ground point coordinates and generating digital surface models (DSM) through aerial triangulation, even though today's high-spatial-resolution digital aerial images record not only the VIS channels but also a near-infrared (NIR) channel; little research has performed these procedures with NIR images. This research therefore exploits the rich spectral information in multispectral images and uses close-range photogrammetry, an image collection method that is simple and weakly constrained, to match multiple VIS, NIR, and color-infrared (CIR) images and generate VIS, NIR, and CIR DSMs respectively. The purpose is to analyze the differences among the VIS, NIR, and CIR data sets, and whether adding the NIR channel to DSM generation better emphasizes vegetated areas. The results show that, in a relative check in which the VIS point cloud serves as the check data and the RMSE (root mean square error) of the NIR and CIR point clouds against it is used as the distance threshold, a point cloud increment of about 21% is obtained. However, although matching NIR or CIR images does increase the amount of point cloud data, its effect in adding data that the VIS images failed to match remains limited.
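The relative check described above can be written down compactly. The sketch below is our reading of it, with a hypothetical function name and a nearest-neighbor interpretation of "distance to the VIS check data": distances from each NIR or CIR point to the VIS cloud set the RMSE threshold, and points beyond that threshold count as added data.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_cloud_increment(vis_pts, nir_pts):
    """Estimate how many NIR/CIR-derived points are 'new' relative to the
    VIS point cloud: nearest-neighbor distances from each NIR point to the
    VIS cloud are computed, their RMSE is used as the distance threshold,
    and points beyond it count as added data."""
    tree = cKDTree(vis_pts)              # vis_pts, nir_pts: (N, 3) arrays
    d, _ = tree.query(nir_pts, k=1)      # NN distance of every NIR point
    thresh = np.sqrt(np.mean(d ** 2))    # RMSE as distance threshold
    return float((d > thresh).mean()), thresh   # increment rate, threshold
```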
