11

Automatic Liver and Tumor Segmentation from CT Scan Images using Gabor Feature and Machine Learning Algorithms

Shrestha, Ujjwal 19 December 2018 (has links)
No description available.
12

Comparative study of table layout analysis : Layout analysis solutions study for Swedish historical hand-written document

Liang, Xusheng January 2019 (has links)
Background. Information retrieval systems have become increasingly popular: they help people retrieve information more efficiently and accelerate daily tasks. In this context, image processing plays an important role in transcribing the content of printed or handwritten documents into digital data for such systems, a procedure called document digitization. During digitization, image processing techniques such as layout analysis and word recognition are employed to segment the document content and transcribe the image content into words. In this setting, a Swedish company (ArkivDigital® AB) needs to transcribe its document collection into digital data.
Objectives. The aim of this study is to find effective solutions for extracting the document layout of Swedish handwritten historical documents, which are characterized by tabular forms containing handwritten content. The outcomes of applying OCRopus, OCRfeeder, traditional image processing techniques, and machine learning techniques to these documents are compared and studied.
Methods. Implementation and experimentation are used to develop three comparative solutions: Hessian filtering with a mask operation; Gabor filtering with a morphological opening; and Gabor filtering with machine learning classification. For the last solution, several alternatives were explored to build the layout extraction pipeline: first, the Hessian and Gabor filters are evaluated; second, images are filtered with the better of the two and the filtered image is refined with the Hough line transform; third, transfer learning features and custom features are extracted; fourth, a classifier is fed with the extracted features and the result is analyzed. After implementation, all solutions are applied to a sample set of the Swedish historical handwritten documents and their performance is compared in a survey.
Results. Both open-source OCR systems, OCRopus and OCRfeeder, fail to deliver usable output because they are designed for general document layouts rather than table layouts. The traditional image processing solutions work in more than half of the cases, but not well. Combining traditional image processing with machine learning gives the best result, but at a large time cost.
Conclusions. The results show that existing OCR systems cannot carry out the layout analysis task on our Swedish historical handwritten documents. Traditional image processing techniques are capable of extracting the general table layout of these documents. Introducing machine learning yields a better and more accurate table layout, but at a greater time cost. / Scalable resource-efficient systems for big data analytics
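As a concrete illustration of the second solution above (Gabor filtering with a morphological opening, refined by the Hough line transform), a minimal OpenCV sketch might look as follows. The kernel sizes, thresholds, and file name are illustrative assumptions, not the parameters used in the thesis.

```python
# A minimal sketch, assuming illustrative parameters: Gabor filtering to
# enhance table rules, a morphological opening to suppress handwriting
# strokes, and a probabilistic Hough transform to refine straight lines.
import cv2
import numpy as np

img = cv2.imread("table_page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Two Gabor kernels tuned to the two rule orientations of a tabular form.
responses = []
for theta in (0.0, np.pi / 2):
    kern = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                              lambd=10.0, gamma=0.5, psi=0)
    responses.append(cv2.filter2D(img, cv2.CV_32F, kern))
resp = np.maximum(*responses)

# Binarize, then clean short handwriting strokes with a morphological opening.
_, binary = cv2.threshold(cv2.convertScaleAbs(resp), 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                          cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1)))

# Refine the surviving ridges into straight table lines.
lines = cv2.HoughLinesP(opened, 1, np.pi / 180, threshold=100,
                        minLineLength=80, maxLineGap=10)
```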
13

Extraction and Application of Secondary Crease Information in Fingerprint Recognition Systems

Hymér, Pontus January 2005 (has links)
This thesis shows that cracks and scars in fingerprint images, referred to as Secondary Creases, can be used to aid and complement fingerprint recognition, especially when there is not enough clear data for traditional methods such as minutiae-based or correlation techniques. A Gabor filter bank is used to extract areas with linear patterns, after which the Hough transform identifies secondary creases in (r, θ) space. The proposed method for secondary crease extraction works well and indicates which areas of an image contain usable linear patterns. The comparison methods are, however, not as robust, yielding a False Rejection Rate of 30% and a False Acceptance Rate of 20% on the proposed dataset of poor-quality fingerprints. In short, our methods make it possible to use fingerprint images previously considered unusable in fingerprint recognition systems.
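The extraction pipeline described here (Gabor filter bank, then a Hough transform yielding (r, θ) crease candidates) can be sketched with scikit-image as below. The frequencies, orientations, and threshold are illustrative assumptions, not the thesis's parameters.

```python
# A minimal sketch, assuming illustrative parameters: a small Gabor bank
# highlights linear patterns, and a Hough transform locates crease
# candidates as peaks in (r, theta) space.
import numpy as np
from skimage import io, filters, transform

img = io.imread("fingerprint.png", as_gray=True)  # hypothetical input

# Gabor bank over a few orientations; keep the strongest magnitude response.
mags = []
for theta in np.linspace(0, np.pi, 8, endpoint=False):
    real, imag = filters.gabor(img, frequency=0.15, theta=theta)
    mags.append(np.hypot(real, imag))
linear = np.max(mags, axis=0)

# Threshold to a binary map of linear-pattern areas, then vote in Hough space.
binary = linear > linear.mean() + 2 * linear.std()
hspace, angles, dists = transform.hough_line(binary)

# Each returned (r, theta) pair parameterizes one secondary-crease candidate.
for _, theta, r in zip(*transform.hough_line_peaks(hspace, angles, dists)):
    print(f"crease candidate at r={r:.1f}, theta={np.degrees(theta):.1f} deg")
```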
15

Vylepšení obrazu z ultrazvuku pro vizuální diagnostiku / Visual Enhancement of Ultrasound Images

Vaňhara, Jaromír January 2011 (has links)
Ultrasound imaging is widely used in medical examinations, but interpreting the images is not trivial and requires considerable experience. This thesis presents various techniques for enhancing the visual quality of ultrasound images, describing several basic and advanced methods that may simplify visual diagnosis. Finally, an interactive application is designed and implemented to make the presented methods easy to use.
16

Fazifikacija Gaborovog filtra i njena primena u detekciji registarskih tablica / Fuzzification of Gabor Filter for License Plate Detection Application

Tadić Vladimir 06 June 2018 (has links)
The thesis presents a new algorithm for detection and extraction of license plates from a vehicle image using a fuzzy two-dimensional Gabor filter. The filter parameters that dominate the filtering result, orientation and wavelength, are fuzzified to optimize the Gabor filter's response and achieve additional selectivity. Bell-shaped and triangular membership functions proved to be the most efficient choices for the fuzzification. The algorithm was evaluated on several image databases and provided satisfactory results: the components of interest were extracted efficiently, and the procedure proved very resistant to noise and image degradation.
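The dissertation's exact parameter ranges are not given in this abstract; the sketch below only illustrates the general idea of fuzzifying the Gabor orientation and wavelength with a triangular membership function and blending the filter responses by membership weight. All centers, spreads, and the file name are assumptions.

```python
# A minimal sketch, assuming illustrative membership centers: Gabor responses
# over candidate (orientation, wavelength) pairs are weighted by a triangular
# membership function and combined into one fuzzified response.
import cv2
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

img = cv2.imread("vehicle.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

thetas = np.linspace(0, np.pi, 9)      # candidate orientations
lambdas = np.linspace(6.0, 14.0, 5)    # candidate wavelengths (pixels)
resp = np.zeros_like(img)
total_w = 0.0
for th in thetas:
    for lam in lambdas:
        # Assumed peaks: plate-like texture near theta = pi/2, lambda = 10 px.
        w = tri(th, 0.0, np.pi / 2, np.pi) * tri(lam, 6.0, 10.0, 14.0)
        if w == 0.0:
            continue
        kern = cv2.getGaborKernel((21, 21), 4.0, th, lam, 0.5, 0)
        resp += w * np.abs(cv2.filter2D(img, cv2.CV_32F, kern))
        total_w += w
fuzzy_response = resp / total_w  # membership-weighted Gabor response
```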
17

Analysis Of Multi-lingual Documents With Complex Layout And Content

Pati, Peeta Basa 11 1900 (has links)
A document image may contain, besides text, pictures, graphs, signatures, logos, barcodes, hand-drawn sketches and/or seals. Further, the text blocks in an image may be in a Manhattan or any complex layout. Document layout analysis is an important preprocessing step before subjecting such an image to OCR: the image, with its complex layout and content, is segmented into its constituent components. For many present-day applications, separating the text from the non-text blocks is sufficient, since it enables the conversion of the text elements in the image to their corresponding editable form. In this work, an effort has been made to separate the text areas from the various kinds of possible non-text elements. The document images may have been obtained from a scanner or a camera. With a scanner, there is control over the scanning resolution and the lighting of the paper surface; moreover, during scanning, the paper surface remains parallel to the sensor surface. When an image is obtained through a camera, these advantages are no longer available. Here, an algorithm is proposed to separate the text in an image from the clutter, irrespective of the imaging technology used. This is achieved by using both the structural and the textural information of the text present in the gray image. A bank of Gabor filters characterizes the statistical distribution of the text elements in the document, and a connected-component-based technique removes certain types of non-text elements from the image. When a camera is used to acquire document images, color information is generally obtained along with the structural and textural information of the text. It can be assumed that the text present in an image has a certain amount of color homogeneity, so a graph-theoretical color clustering scheme is employed to segment the iso-color components of the image. Each iso-color image is then analyzed separately for its structural and textural properties, and the results of these analyses are merged with the information obtained from the gray component of the image. This helps to separate the colored text areas from the non-text elements. The proposed scheme is computationally intensive, because the separation of text from non-text entities is performed at the pixel level. Since any entity is represented by a connected set of pixels, it makes more sense to carry out the separation only at specific points selected as representatives of their neighborhood. Harris' operator evaluates an edge measure at each pixel and selects pixels that are locally rich on this measure; these points are then employed for separating text from non-text elements.
Many government documents and forms in India are bi-lingual or tri-lingual in nature. Further, in school textbooks, it is common to find English words interspersed within sentences in the main Indian language of the book. In such documents, successive words in a line of text may be of different scripts (languages); hence, for OCR of these documents, the script must be recognized at the level of words rather than lines or paragraphs. A database of about 20,000 words each from 11 Indian scripts (Bengali, Devanagari, Gujarati, Kannada, Malayalam, Odiya, Punjabi, Roman, Tamil, Telugu and Urdu) was created; this is so far the largest database of Indian words collected and deployed for script recognition. Here again, a bank of 36 Gabor filters is used to extract the feature vector that represents the script of the word. The effectiveness of Gabor features is compared with that of DCT, and Gabor features are found to marginally outperform DCT. Simple, linear and non-linear classifiers are employed to classify the word in the feature space. It is assumed that a scheme developed to recognize the script of words would work equally well for sentences and paragraphs; this assumption has been verified with supporting results. A systematic study has been conducted to evaluate and compare the accuracy of various feature-classifier combinations for word script recognition. We have considered the cases of bi-script and tri-script documents, which are widely available. Average recognition accuracies for the bi-script and tri-script cases are 98.4% and 98.2%, respectively. A hierarchical blind script recognizer involving all eleven scripts has been developed and evaluated, yielding an average accuracy of 94.1%.
The major contributions of the thesis are:
• A graph-theoretic color clustering scheme is used to segment colored text.
• A scheme is proposed to separate text from the non-text content of documents with complex layout and content, captured by scanner or camera.
• Computational complexity is reduced by performing the separation task on a selected set of locally edge-rich points.
• Script identification at word level is carried out using different feature-classifier combinations; Gabor features with an SVM classifier outperform all other combinations (see the sketch after this list).
• A hierarchical blind script recognition algorithm involving the 11 Indian scripts is developed. This structure employs the most efficient feature-classifier combination at each node of the tree to maximize system performance, and a sequential forward feature selection algorithm selects the most discriminating features, case by case, for script recognition.
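The word-level Gabor-plus-SVM combination can be sketched as follows. The abstract does not give the exact design of the 36-filter bank or the SVM settings, so the sketch assumes 6 orientations × 6 frequencies, mean filter-response magnitudes as features, and an RBF kernel; the file names and labels are placeholders.

```python
# A minimal sketch, assuming a 6-orientation x 6-frequency Gabor bank: each
# word image yields a 36-dimensional feature vector (mean response magnitude
# per filter), which an SVM classifies by script.
import numpy as np
from skimage import io, filters
from sklearn.svm import SVC

def gabor_features(word_img):
    feats = []
    for theta in np.linspace(0, np.pi, 6, endpoint=False):   # 6 orientations
        for freq in (0.05, 0.1, 0.15, 0.2, 0.25, 0.3):       # 6 frequencies
            real, imag = filters.gabor(word_img, frequency=freq, theta=theta)
            feats.append(np.hypot(real, imag).mean())        # 36-dim vector
    return np.array(feats)

word_paths = ["word_0001.png", "word_0002.png"]   # hypothetical file list
script_labels = ["Tamil", "Roman"]                # hypothetical labels

X = np.stack([gabor_features(io.imread(p, as_gray=True)) for p in word_paths])
clf = SVC(kernel="rbf").fit(X, script_labels)
```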
18

從作者與發明人的關係探討技術發展各階段論文與專利活動之關聯性──以電腦視覺領域之賈伯濾波器技術為例 / Discovering the Relationship between Publishing and Patenting Activities from the Relatedness of Authors and Inventors over the Life Cycles of Technological Development── Case Study of Gabor Filter in Computer Vision

許舜棋, Hsu, Shun Chi Unknown Date (has links)
Mining information to improve corporate R&D decision making has become an important source of competitive advantage in a rapidly changing technological environment. In recent years, thanks to ever-growing computing power, extracting relevant information quickly and automatically from massive amounts of technological data (especially patents and scientific publications) has become an active research area for both industrial and academic researchers.
Among the methods for retrieving technological information, bibliometrics and patent analysis are two of the most attractive. They provide a quick way to capture the dynamics of technological development, including the stage of development, active research areas, and important researchers and corporations. Although bibliometrics and patent analysis help in understanding the landscape of technological development, there is still a lack of research on the relationship between invention and scientific research activities, and on the dynamics between patent inventors and publication authors across the stages of technological development.
Hence, this research raises the following questions:
1. What is the relation between scientific research/invention activities and technological development stages for different categories of publication authors and patent inventors?
2. How do the scientific research/invention activities of Inventor-Authors differ from those of other inventors and authors?
3. What is the relation between the scientific research and invention activities of Inventor-Authors?
This research reviews the related literature to define a research framework connecting authors, inventors, and technological development stages; patent and publication data are then collected and processed according to the framework, and conclusions are drawn after analysis and discussion. The conclusions include the following:
1. "Talent Inventors" play an important role when technological development is in the "Emerging" stage, while "Key Inventors" mostly start patenting after development enters the "Growth" stage. "Top Authors" play an important role across the "Emerging", "Growth" and "Maturity" stages of technological development.
2. "Key Inventors" are highly likely to also be "Top Authors", while "Inventor-Authors" who are not "Key Inventors" are slightly more likely to perform worse in patenting than other inventors.
3. Most "Inventor-Authors" apply for patents after publishing papers on highly related topics, but "Key Inventors" tend to apply for patents before publishing such papers.
19

Generátor otisků prstů / Fingerprints Generator

Chaloupka, Radek Unknown Date (has links)
Algorithms for fingerprint recognition have long been known, and much effort has gone into optimizing them. This master's thesis takes the opposite approach: fingerprints are not recognized but generated, based on given minutiae positions. Such an algorithm therefore needs no minutiae detection or fingerprint image enhancement. The results of this work are synthetic images generated according to a few given parameters, especially the minutiae.
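The abstract does not detail the generation algorithm. The sketch below follows the common SFinGe-style approach of iteratively Gabor-filtering random noise under an orientation field derived from the given minutiae, which is one plausible reading of "generated on the minutiae position basis"; all parameters and the toy orientation field are assumptions.

```python
# A minimal sketch, assuming a SFinGe-style pipeline: seed noise is filtered
# repeatedly with Gabor kernels aligned to a minutiae-perturbed orientation
# field, so ridge-like structure emerges around the given minutiae.
import cv2
import numpy as np

h, w = 256, 256
rng = np.random.default_rng(0)
img = rng.random((h, w)).astype(np.float32)     # seed noise

# Toy orientation field: a constant flow perturbed near each minutia.
minutiae = [(80, 100, np.pi / 4), (170, 140, np.pi / 2)]  # (y, x, angle)
ys, xs = np.mgrid[0:h, 0:w]
orient = np.full((h, w), np.pi / 3, dtype=np.float32)
for my, mx, ang in minutiae:
    weight = np.exp(-((ys - my) ** 2 + (xs - mx) ** 2) / (2 * 30.0 ** 2))
    orient = (1 - weight) * orient + weight * ang

# Each pass reinforces ridge flow along the local orientation bin.
for _ in range(5):
    out = np.zeros_like(img)
    for ang in np.unique(np.round(orient / (np.pi / 8)) * (np.pi / 8)):
        kern = cv2.getGaborKernel((17, 17), 3.0, float(ang), 8.0, 0.5, 0)
        filtered = cv2.filter2D(img, cv2.CV_32F, kern)
        mask = np.abs(orient - ang) < np.pi / 16
        out[mask] = filtered[mask]
    img = cv2.normalize(out, None, 0.0, 1.0, cv2.NORM_MINMAX)
```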
