About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

超媒體文件運算結構之設計與建置研究 / Design and Implementation of a Computational Structure for Hypermedia Documents

張聖武 Unknown Date (has links)
Hypermedia documents authored in HTML currently lack precise, dependency-aware descriptions of the temporal and spatial relations among multimedia objects for document designers, and the interactive effects they offer readers are also limited. To remedy these shortcomings, this study proposes a computational structure for hypermedia documents that handles the description of temporal and spatial relations among multimedia objects as well as user interaction. Using object-oriented design, we define an operation environment class, unary operation classes, and multi-operand operation classes, and implement a scripting language tool in Java to augment the presentation capabilities of HTML.
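The thesis implements its scripting tool in Java; the following is only a conceptual sketch, in Python and with hypothetical class and operation names, of how unary and multi-operand temporal operations over multimedia objects might be expressed:

```python
from dataclasses import dataclass

@dataclass
class MediaObject:
    """A multimedia element with a start time and duration, in seconds."""
    name: str
    start: float
    duration: float

    @property
    def end(self) -> float:
        return self.start + self.duration

def delay(obj: MediaObject, seconds: float) -> MediaObject:
    """Unary temporal operation: shift a single object later in time."""
    return MediaObject(obj.name, obj.start + seconds, obj.duration)

def sequence(*objs: MediaObject) -> list[MediaObject]:
    """Multi-operand temporal operation: schedule objects back to back,
    so each object's start depends on its predecessor's end."""
    scheduled, cursor = [], 0.0
    for o in objs:
        scheduled.append(MediaObject(o.name, cursor, o.duration))
        cursor += o.duration
    return scheduled

video = MediaObject("intro.mp4", 0.0, 10.0)
caption = MediaObject("caption.png", 0.0, 5.0)
audio = MediaObject("narration.mp3", 0.0, 8.0)
print(sequence(video, caption))   # caption starts when the video ends
print(delay(audio, 3.0))          # narration starts 3 seconds later
```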
12

中文資訊擷取結果之錯誤偵測 / Error Detection on Chinese Information Extraction Results

鄭雍瑋, Cheng, Yung-Wei Unknown Date (has links)
Information extraction (IE) identifies descriptions of a targeted subject or event in natural-language text, extracts the corresponding elements, and populates a database in which each record is a subject instance documented in the text collection, thereby turning free text into structured core information. Even with state-of-the-art techniques, however, IE results contain errors, and relying solely on manual inspection and correction is labor-intensive and time-consuming; this validation cost remains a major obstacle to deploying practical IE applications with high validity requirements. This thesis proposes two error-detection methods: one based on a string graph structure and one based on string features. The former builds a graph over the characters of each value and the relations between them, computes a matching score for each record, and judges correctness from that score; it works on the raw strings directly and draws on the graph-theoretic notions of node position and neighboring nodes. The latter transforms each string into feature values that describe its surface characteristics and classifies records as valid or invalid with SVM and C4.5 machine-learning methods. In our experiments we use an IE results database built from the 總統府人事任免公報 (Presidential Office personnel appointment and dismissal gazette) as test data. The results show that both methods effectively detect invalid values, reducing validation cost, safeguarding the quality of IE output, and supporting broader practical deployment of IE techniques.
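A minimal sketch of the feature-based variant, assuming hypothetical surface features (length, digit ratio, punctuation count) and using scikit-learn's SVM and CART decision tree in place of the thesis's C4.5 setup:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def surface_features(value: str) -> list[float]:
    """Hypothetical surface features describing a string's appearance."""
    digits = sum(c.isdigit() for c in value)
    punct = sum((not c.isalnum()) and (not c.isspace()) for c in value)
    return [float(len(value)), digits / max(len(value), 1), float(punct)]

# Toy training data: extracted field values labeled valid (1) or invalid (0).
values = ["科長 王小明", "2008年7月1日", "任免;;;##", "x"]
labels = [1, 1, 0, 0]
X = np.array([surface_features(v) for v in values])

svm = SVC(kernel="rbf").fit(X, labels)
tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)  # CART stands in for C4.5

candidate = "部長 李大華"
feats = np.array([surface_features(candidate)])
print("SVM:", svm.predict(feats), "Tree:", tree.predict(feats))
```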
13

基於點群排序關係的特徵描述子建構 / Feature descriptor based on local intensity order relations of pixel group

吳家禎, Wu, Chia Chen Unknown Date (has links)
With advances in technology and the spread of the Internet, images have gradually taken over much of the communication once carried by text, and demand for image applications has diversified, making image processing and image analysis increasingly important. One key piece of information derived from an image is the feature descriptor: a strong descriptor yields better results in recognition, classification, and related applications. By their encoding principle, descriptor construction methods fall into three families: those based on local gradient statistics, those based on point-pair relations, and those based on pixel-group relations. Group-based encodings have rarely been used, because selecting and ordering the points in a group can generate an unwieldy number of relation patterns, which makes computation impractical. This thesis proposes LIOR, a descriptor construction method that encodes the order relations within a pixel group. Whereas the dimensionality of LIOP grows drastically as the number of points per group increases, consuming large amounts of computation time and storage, the proposed method alleviates this dimensionality problem by redefining the ranking mechanism over group relations and by introducing intensity-based weighting that reflects how large the differences between weighted ranks are. Experiments on datasets with various image degradations show that the method improves matching performance when larger point groups are used, replaces the over-large ranking representation of group relations, lowers the dimensionality of group-based descriptors, and saves matching time and storage while maintaining overall matching performance.
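A minimal sketch of the underlying idea, encoding pairwise intensity order relations within a sampled pixel group; the sampling pattern, histogram layout, and binning below are simplified assumptions rather than the thesis's exact formulation:

```python
from itertools import combinations
import numpy as np

def pairwise_order_code(intensities: np.ndarray) -> np.ndarray:
    """Encode each of the C(N,2) pixel pairs of a group as 0 or 1,
    depending on which member is brighter (ties broken toward 0)."""
    return np.array([1 if intensities[j] > intensities[i] else 0
                     for i, j in combinations(range(len(intensities)), 2)])

def group_histogram(codes: np.ndarray) -> np.ndarray:
    """One two-bin slot per pair; a region descriptor would sum these
    histograms over all pixel groups in the region and normalize."""
    hist = np.zeros(2 * len(codes))
    for k, c in enumerate(codes):
        hist[2 * k + c] += 1
    return hist

# A group of N = 4 samples around an interest point: 2 * C(4,2) = 12 dimensions,
# instead of the 4! = 24 permutation bins an LIOP-style encoding would need.
group = np.array([120.0, 95.0, 130.0, 118.0])
print(group_histogram(pairwise_order_code(group)))
```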
14

基於點群排序關係的動態設定特徵描述子建構及優化 / Construction and optimization of feature descriptor based on dynamic local intensity order relations of pixel group

游佳霖, Yu, Carolyn Unknown Date (has links)
With the popularity of smartphones, the volume of images captured and processed directly on mobile devices has grown significantly in recent years. Image feature descriptors, which play a crucial role in recognition tasks, are therefore expected to deliver robust matching performance while keeping storage requirements reasonable. Among previously proposed local feature descriptors, local intensity order patterns (LIOP) perform very well in many benchmark studies, but because LIOP encodes the full ranking relation of a point set with N elements, its dimension grows factorially (N!) with the number of neighboring sample points around a pixel. To alleviate this dimensionality problem, this thesis presents Dynamic Local Intensity Order Relations (DLIOR), a descriptor that encodes pairwise intensity relations within each pixel group, reducing the feature dimension to the order of C(N,2). The threshold for assigning an order relation is set dynamically according to the local intensity distribution, and several weighting schemes, including a linear transformation and pairwise Euclidean distance, are investigated to adjust the contribution of each pair relation. Experimental results indicate that DLIOR uses less storage than LIOP yet achieves better feature matching performance on benchmark datasets.
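A minimal sketch of the dynamic-threshold idea, assuming the threshold comes from the local intensity distribution (here the median absolute pair difference, a hypothetical choice) and that near-equal pairs receive their own relation code:

```python
from itertools import combinations
import numpy as np

def dynamic_threshold(patch: np.ndarray) -> float:
    """Hypothetical per-patch threshold: the median absolute difference over
    all sample pairs, so flat and high-contrast patches adapt differently."""
    diffs = [abs(patch[i] - patch[j])
             for i, j in combinations(range(len(patch)), 2)]
    return float(np.median(diffs))

def dlior_codes(group: np.ndarray, threshold: float) -> list[int]:
    """Three-way relation per pair: brighter, darker, or within the threshold."""
    codes = []
    for i, j in combinations(range(len(group)), 2):
        diff = group[i] - group[j]
        codes.append(0 if diff > threshold else 1 if diff < -threshold else 2)
    return codes

patch = np.array([120.0, 95.0, 130.0, 118.0, 97.0, 121.0])
t = dynamic_threshold(patch)
print(t, dlior_codes(patch[:4], t))  # dimension grows as C(N,2), not N!
```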
15

在Spark大數據平台上分析DBpedia開放式資料:以電影票房預測為例 / Analyzing DBpedia Linked Open Data (LOD) on Spark: Movie Box Office Prediction as an Example

劉文友, Liu, Wen Yu Unknown Date (has links)
In recent years, Linked Open Data (LOD) has been recognized as holding a great deal of potential value, and how to collect and integrate heterogeneous LOD sources so that analysts can extract and analyze them has become an important research challenge. LOD is published in the Resource Description Framework (RDF) format and can be queried with SPARQL, but large RDF collections still lack an integrated, high-performance, scalable storage and query-analysis system, and the analytics pipeline for big RDF data is not yet well established. This study addresses these issues with a movie box-office prediction scenario that uses the DBpedia LOD dataset linked to an external movie database (IMDb), performing large-scale graph analytics on the Apache Spark platform. Prediction models are built with Naïve Bayes and Bayesian network classifiers, the Bayesian Information Criterion (BIC) is used to select the best Bayesian network structure, and multi-class ROC curves and AUC values are computed to evaluate the accuracy of the resulting models.
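A minimal sketch of the modeling step on Spark, assuming hypothetical numeric movie features already extracted from DBpedia/IMDb (budget, runtime, number of starring links) and a binary hit/flop label; the full pipeline, BIC-based structure search, and multi-class ROC analysis of the thesis are not reproduced here:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import NaiveBayes
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("box-office-sketch").getOrCreate()

# Hypothetical rows extracted from DBpedia/IMDb: budget (M USD), runtime (min),
# number of dbo:starring links, label 1 = above-median box office.
rows = [(150.0, 136.0, 8.0, 1.0), (5.0, 92.0, 3.0, 0.0),
        (90.0, 121.0, 6.0, 1.0), (12.0, 105.0, 4.0, 0.0)]
df = spark.createDataFrame(rows, ["budget", "runtime", "starring", "label"])

data = VectorAssembler(inputCols=["budget", "runtime", "starring"],
                       outputCol="features").transform(df)

model = NaiveBayes(featuresCol="features", labelCol="label").fit(data)
auc = BinaryClassificationEvaluator(metricName="areaUnderROC") \
        .evaluate(model.transform(data))   # toy in-sample AUC
print("AUC:", auc)
spark.stop()
```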
16

服務導向架構網路服務整合金融資產帳戶之研究 / A Study on Integrating Financial Asset Accounts with Service-Oriented Architecture Web Services

張宏斌, Chang, Hung Pin Unknown Date (has links)
The rise of the Internet has changed consumer behavior: customers can now use financial services online, and more and more financial institutions are building Internet banking sites that offer one-stop service/shopping, multiple service channels, and round-the-clock, year-round availability. Customers nevertheless remain concerned about security issues such as account misappropriation by site operators and hacker intrusion. In addition, each bank describes its account data differently, and the lack of a common data standard makes it difficult to integrate accounts dynamically and to provide consolidated account reports. This thesis addresses these problems by building a service-oriented Web services platform that integrates a customer's asset accounts. Financial institutions publish their Web services to a UDDI (Universal Description, Discovery and Integration) registry; the common platform reconciles the account field names, and the customer retrieves the actual account details and maps them into a consolidated report. This architecture ensures that account information is delivered from the financial institution directly to the client without any third-party intermediary, while also resolving the inconsistency of field names across banks. The contributions of this thesis are: (1) information exchange among financial institutions through XML and Web services; (2) the adoption of a service-oriented architecture to dynamically discover services and map account fields, enabling cross-bank account aggregation; and (3) consolidated client-side account reports.
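A minimal sketch of the field-name reconciliation step, with hypothetical bank responses and a hand-written mapping table standing in for the platform's UDDI-driven service discovery:

```python
# Hypothetical per-bank field-name mappings onto a common account schema.
FIELD_MAP = {
    "bank_a": {"acct_no": "account_id", "bal": "balance", "ccy": "currency"},
    "bank_b": {"AccountNumber": "account_id", "CurrentBalance": "balance",
               "Currency": "currency"},
}

def normalize(bank: str, record: dict) -> dict:
    """Rename one bank's account fields to the common schema."""
    return {FIELD_MAP[bank].get(k, k): v for k, v in record.items()}

def aggregate(responses: dict[str, list[dict]]) -> list[dict]:
    """Merge normalized records from every bank into one consolidated report."""
    report = []
    for bank, records in responses.items():
        report.extend({"bank": bank, **normalize(bank, r)} for r in records)
    return report

responses = {
    "bank_a": [{"acct_no": "001-123", "bal": 52000, "ccy": "TWD"}],
    "bank_b": [{"AccountNumber": "9988-7", "CurrentBalance": 1200,
                "Currency": "USD"}],
}
print(aggregate(responses))
```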
17

以靜態織入方法實現剖面導向工作流程 / Design and Implementation of a Static Weaver for Aspectual Workflow

許朝傑, Hsu, Chao Chieh Unknown Date (has links)
Cross-cutting concerns are system design issues that cut across the modules of an application; they are typically foundational services, such as logging, authentication, authorization, and persistence, that must be considered before building the application itself. When they are not separated out, cross-cutting concerns scatter and tangle the code, mixing it with the main functional requirements and making the system harder to understand and maintain, and the same problem arises in workflow development. In this research we apply the concepts and techniques of aspect-oriented programming (AOP) to workflow systems, taking JBoss jBPM (Java Business Process Management) as the base platform, so that process designers can modularize cross-cutting requirements in AOP style. We implement a static weaver for jBPM, which also improves the performance of weaving aspectual processes in the workflow engine.
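A minimal conceptual sketch, in Python rather than the thesis's jBPM/Java setting, of weaving logging advice around workflow node handlers before execution (a stand-in for static weaving into the process definition; all names are hypothetical):

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("workflow")

def logging_advice(handler: Callable[[dict], dict]) -> Callable[[dict], dict]:
    """Around-advice: log entry and exit of a workflow node handler."""
    def woven(context: dict) -> dict:
        log.info("entering node %s", handler.__name__)
        result = handler(context)
        log.info("leaving node %s", handler.__name__)
        return result
    return woven

def weave(process: dict[str, Callable], pointcut: set[str]) -> dict[str, Callable]:
    """'Static' weaving: rewrite the process definition once, ahead of time,
    instead of intercepting each node at execution time."""
    return {name: logging_advice(h) if name in pointcut else h
            for name, h in process.items()}

def review(ctx: dict) -> dict:
    return {**ctx, "reviewed": True}

def approve(ctx: dict) -> dict:
    return {**ctx, "approved": True}

woven_process = weave({"review": review, "approve": approve},
                      pointcut={"approve"})
ctx = {}
for node in ("review", "approve"):
    ctx = woven_process[node](ctx)
print(ctx)
```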
18

應用共變異矩陣描述子及半監督式學習於行人偵測 / Semi-supervised learning for pedestrian detection with covariance matrix feature

黃靈威, Huang, Ling Wei Unknown Date (has links)
Pedestrian detection is an important yet challenging problem in object detection because of highly variable body poses, loose clothing, and ever-changing illumination. In this thesis we employ the covariance matrix descriptor and propose an online-learning classifier that combines a naïve Bayes classifier with a cascade of support vector machines (SVMs) to improve the precision and recall of pedestrian detection in still images. Experimental results show that the online-learning strategy improves precision and recall by about 14% on datasets with difficult recognition conditions. Furthermore, even under the same initial training conditions, the method outperforms HOG with AdaBoost on the USC Pedestrian Detection Test Set, the INRIA Person dataset, and the Penn-Fudan Database for Pedestrian Detection and Segmentation.
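A minimal sketch of a region covariance descriptor, assuming a common per-pixel feature choice (coordinates, intensity, first derivatives); the thesis's exact feature set and its naïve Bayes plus cascade-SVM online-learning stage are not shown:

```python
import numpy as np

def covariance_descriptor(patch: np.ndarray) -> np.ndarray:
    """Covariance matrix of per-pixel features over an image region.
    Assumed features per pixel: x, y, intensity, |dI/dx|, |dI/dy|."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(patch.astype(float))
    features = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                         np.abs(dx).ravel(), np.abs(dy).ravel()])
    return np.cov(features)   # 5x5 symmetric matrix describing the region

# Toy example on a random 32x16 grayscale patch.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 16))
C = covariance_descriptor(patch)
print(C.shape, np.allclose(C, C.T))
```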
19

臺灣檔案典藏單位口述歷史館藏整理與運用 / Organization and access of oral history collection in archival repositories of Taiwan

顏佩貞, Yen, Pei Chen Unknown Date (has links)
In recent years private records have increasingly attracted the attention of archives and become part of their acquisition scope, and oral history is one such category. Oral history work produces audio tapes, video tapes, electronic files, interview transcripts, materials donated by interviewees, and other quite heterogeneous items, and Taiwan's archival repositories organize and preserve these materials in different ways after conducting oral history projects. This study seeks to understand how archival repositories in Taiwan organize and provide access to their oral history collections, and to propose a more complete mechanism for arrangement and management together with more diverse ways of using the collections. Using document analysis, in-depth interviews, and comparative research, the study examines practices in Singapore, Australia, the United States, Canada, the United Kingdom, and Hong Kong, and interviews Taiwanese repositories including 中研院近史所, 國史館, 國史館臺灣文獻館, 北市文獻會, 宜蘭獻史館, 臺大校史館, 海大校史室, and 清大校史館. The findings lead to six conclusions: (1) repositories conduct oral history mainly for three purposes: academic research, collecting historical materials, and compiling institutional or school histories; (2) staffing for organizing oral history collections is insufficient; (3) oral history materials are mostly stored separately by carrier type; (4) only some repositories have created even basic records for their oral history holdings; (5) the oral history materials made available for use are mostly revised final transcripts; and (6) the ways the collections are used remain limited. Six recommendations follow: (1) store oral history materials according to their carriers; (2) speed up basic arrangement of oral history collections so they can be made available; (3) describe the content of oral history collections; (4) build intellectual arrangement that links related oral history materials; (5) broaden the uses of oral history collections; and (6) establish a national oral history database.
20

科技政策網站內容分析之研究 / A Study of Content Analysis of Science and Technology Policy Websites

賴昌彥, Lai, Chang-Yen Unknown Date (has links)
With the rapid growth of World Wide Web (WWW) applications, the Internet is flooded with information resources of every kind, and managing and retrieving these data effectively has become one of the central problems of information management. The most common tool for discovering information is the search engine, which matches a query string against an index table, finds the relevant web documents, and returns the results; however, because web pages carry too little descriptive information, search engines return large numbers of irrelevant results and waste users' time. To address this problem from the information-search perspective, this study proposes an architecture that analyzes web page content with text mining techniques, converts it into dimensional descriptions, and stores it in a multidimensional database, serving as a reference architecture for improving current information retrieval. From the information-description perspective, the study proposes using RDF (Resource Description Framework) to describe web page metadata. Describing web resources in this common data format provides a cross-domain standard for expressing information and eases communication between Web applications, with the aim of remedying the shortcomings of current Internet resource description and substantially improving search quality.
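A minimal sketch of describing a web page's metadata in RDF, using the rdflib library and Dublin Core terms; the specific properties and the example URL are illustrative, not taken from the thesis:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

g = Graph()
page = URIRef("http://example.org/policy/ai-strategy")  # illustrative URL

# Describe the page with a few Dublin Core / FOAF properties.
g.add((page, RDF.type, FOAF.Document))
g.add((page, DCTERMS.title, Literal("National AI Strategy", lang="en")))
g.add((page, DCTERMS.subject, Literal("science and technology policy")))
g.add((page, DCTERMS.language, Literal("zh-TW")))
g.add((page, DCTERMS.issued, Literal("2004-06-30")))

# Serialize the metadata so other Web applications can consume it.
print(g.serialize(format="turtle"))
```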
