1.

雲端筆記之混合式文字切割與辨識 / Segmentation and recognition of mixed characters for cloud-based notes

Wang, Guan Jhih (王冠智), Unknown Date
Character recognition is an important and practical application of computer vision. With the advance of this technology, more and more services embedding text recognition functionality have become available. However, segmentation is still the central issue in many situations. In this thesis, we tackle the character segmentation problem in note-taking and management applications, and propose novel methods for discriminating handwritten from machine-printed Chinese characters. First, we remove common note annotations using heuristics and apply a stroke filter with modified kernels to efficiently compute the bounding box of the text area; the new kernels substantially reduce the computation time of the original ones. The responses of the stroke filter also serve as clues for differentiating machine-printed and handwritten text, since the filter rewards regular stroke structure and the more uniform printed glyphs respond differently from handwriting. These clues are further enhanced by an SVM-based classifier that takes aggregated directional responses of Sobel edge detectors as input. Experimental results have validated the efficacy of the proposed approaches in terms of text localization and style recognition.
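The directional-feature idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the thesis code: it applies the two 3x3 Sobel operators, bins gradient orientations into angle ranges, and returns the magnitude-weighted histogram that would feed the SVM classifier. The function name, the four-bin split, and the toy patch are all assumptions.

```python
import math

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def directional_histogram(image, bins=4):
    """Aggregate Sobel gradient orientations of a grayscale patch
    (list of rows) into `bins` angle ranges, weighted by magnitude."""
    h, w = len(image), len(image[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            # Fold orientation into [0, pi) and pick its angle-range bin.
            angle = math.atan2(gy, gx) % math.pi
            hist[min(int(angle / math.pi * bins), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]  # normalized feature vector

# A patch with one vertical edge concentrates all energy in the first bin.
patch = [[0, 0, 255, 255]] * 4
print(directional_histogram(patch))  # → [1.0, 0.0, 0.0, 0.0]
```

In the thesis this kind of normalized histogram would be one input vector per character region; an off-the-shelf SVM (omitted here) then separates the regular responses of printed glyphs from the more scattered responses of handwriting.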
2.

Text Localization for Unmanned Ground Vehicles

Kirchhoff, Allan Richard, 16 October 2014
Unmanned ground vehicles (UGVs) are increasingly used for civilian and military applications. Passive sensors, such as visible-light cameras, are used for navigation and object detection. An additional object of interest in many environments is text, which can supplement the autonomy of unmanned ground vehicles. Text most often appears in the environment in the form of road signs and storefront signs. Road-hazard information, unmapped route detours, and traffic information are available to human drivers through road signs. Premade road maps lack these traffic details, but with text localization the vehicle could fill the information gaps. Leading text localization algorithms achieve ~60% accuracy; however, practical applications are cited to require at least 80% accuracy [49].

The goal of this thesis is to test existing text localization algorithms against challenging scenes, identify the best candidate, and optimize it for scenes a UGV would encounter. Promising text localization methods were tested against a custom dataset created to best represent such scenes, including road signs and storefront signs against complex backgrounds. The methods tested were adaptive thresholding, the stroke filter, and the stroke width transform. A temporal tracking proof of concept was also tested; it tracked text through a series of frames in order to reduce false positives.

The best results were obtained using the stroke width transform with temporal tracking, which achieved an accuracy of 79%; that level of performance approaches the requirements for practical applications. Without temporal tracking, the stroke width transform yielded an accuracy of 46%. The runtime was 8.9 seconds per image, which is 44.5 times slower than necessary for real-time object tracking. Converting the MATLAB code to C++ and running the text localization on a GPU could provide the necessary speedup. / Master of Science
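The intuition behind the stroke width transform evaluated above is that text strokes have nearly constant width, while background clutter does not. The sketch below is a deliberate simplification under stated assumptions: instead of the real SWT, which shoots rays along gradient directions, it measures horizontal run lengths of ink pixels in a binarized patch and accepts regions whose widths are consistent. The function names and the 0.5 coefficient-of-variation threshold are illustrative, not from the thesis.

```python
def horizontal_stroke_widths(binary_rows):
    """Return the length of every horizontal run of 1-pixels."""
    widths = []
    for row in binary_rows:
        run = 0
        for px in row:
            if px:
                run += 1
            elif run:
                widths.append(run)
                run = 0
        if run:  # run touching the right edge
            widths.append(run)
    return widths

def looks_like_text(binary_rows, max_cv=0.5):
    """Text strokes tend to have low width variance relative to the mean,
    so threshold the coefficient of variation of the run lengths."""
    widths = horizontal_stroke_widths(binary_rows)
    if not widths:
        return False
    mean = sum(widths) / len(widths)
    var = sum((w - mean) ** 2 for w in widths) / len(widths)
    return (var ** 0.5) / mean <= max_cv

# Uniform 2-pixel strokes pass; a blob with wildly varying runs does not.
glyph = [[1, 1, 0, 1, 1], [1, 1, 0, 1, 1], [1, 1, 0, 1, 1]]
blob  = [[1, 0, 0, 0, 0], [1, 1, 1, 1, 1], [0, 0, 0, 0, 1]]
print(looks_like_text(glyph), looks_like_text(blob))  # → True False
```

The temporal-tracking step the thesis adds would sit on top of a detector like this, keeping only candidate regions that persist across consecutive frames to suppress false positives.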
