81

OCR modul pro rozpoznání písmen a číslic / OCR module for recognition of letters and numbers

Kapusta, Ján January 2010 (has links)
This paper describes basic methods used for optical character recognition. It explains the whole recognition pipeline, from image adjustment and preprocessing through feature extraction to matching algorithms. It compares methods and algorithms for recognizing characters in graphically distorted or otherwise modified images, the so-called "captcha" images in common use today. It further compares a method based on invariant moments with a neural network as the final classifier against a method based on the correlation between reference characters (normals) and the characters being recognized.
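As a rough illustration of the moment-based features mentioned above (our sketch, not code from the thesis), the following computes Hu invariant moments for a binarized character image with OpenCV; these seven values are invariant to translation, scale and rotation and could serve as the feature vector fed to a classifier such as a neural network. The file name and the log-scaling step are illustrative assumptions.

```python
import cv2
import numpy as np

# Load a single character image and binarize it (file name is a placeholder).
img = cv2.imread("character.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Hu moments: seven values invariant to translation, scale and rotation.
moments = cv2.moments(binary)
hu = cv2.HuMoments(moments).flatten()

# Log-scale the moments so they fall into a comparable numeric range
# before handing them to a classifier.
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
print(hu_log)
```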
82

Svět kolem nás jako hyperlink / Local Environment as Hyperlink

Mešár, Marek January 2013 (has links)
The document describes selected techniques and approaches to the problem of text detection, extraction and recognition on modern mobile devices. It also describes how the recognized text is presented in the user interface and converted into hyperlinks serving as a source of information about the surrounding world. The paper outlines a text detection and recognition technique based on MSER detection and also describes the use of an image-feature tracking method for text motion estimation.
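A minimal OpenCV sketch of the MSER-based detection step described above (an illustrative assumption, not the thesis code): maximally stable extremal regions are detected in a grayscale frame and their bounding boxes are kept as candidate text regions. The image path is a placeholder.

```python
import cv2

# Load a camera frame or photo (path is a placeholder).
frame = cv2.imread("scene.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect maximally stable extremal regions (MSER) as text-region candidates.
mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(gray)

# Draw the candidate bounding boxes; a real pipeline would filter them
# (aspect ratio, stroke width, ...) before running OCR and building hyperlinks.
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imwrite("candidates.jpg", frame)
```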
83

Zpracování obrazu v zařízení Android - detekce a rozpoznání vizitky / Image processing using Android device - automatic detection and recognition of business cards

Krčmář, Martin January 2016 (has links)
The aim of this Master's thesis is to design and develop an Android application for automatic recognition of business cards and import of their contact information. The first part describes the history, architecture and development tools of the Android operating system. The second part analyzes the computer vision methods used while developing the application; the OpenCV and Tesseract OCR libraries are described here. The main part covers the development of the application, including the conditions and limitations required for it to function properly. The final part evaluates how successfully contact information is recognized and imported from business cards.
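A hedged sketch of the kind of OpenCV + Tesseract pipeline such an application might use (the thesis itself targets Android; the file name, threshold parameters and regular expressions below are assumptions for illustration only):

```python
import re
import cv2
import pytesseract

# Load a photographed business card (file name is a placeholder).
card = cv2.imread("card.jpg")
gray = cv2.cvtColor(card, cv2.COLOR_BGR2GRAY)

# Adaptive thresholding copes better with uneven lighting than a global threshold.
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 10)

# Run Tesseract on the cleaned image.
text = pytesseract.image_to_string(binary)

# Very rough contact-field extraction; the patterns are illustrative only.
email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
phone = re.search(r"\+?[\d ()/-]{7,}", text)
print("e-mail:", email.group(0) if email else None)
print("phone:", phone.group(0) if phone else None)
```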
84

Är du redo att anta utmaningen? / Are You Ready to Take on the Challenge?

Ekberg, Ida, Mårtensson, Victoria January 2018 (has links)
The thesis takes a qualitative approach through a multiple-case study in which twelve people who had participated in and completed the obstacle course race Toughest were interviewed. The purpose of the study is to understand the Toughest competition format from the participants' perspective through semi-structured interviews, and to examine whether there is a relationship between a background in physical activity and participation in Toughest. The interviews were conducted by both authors at different times, on weekdays and weekends, during the day and in the evening, and notes were taken; the respondents' answers were transcribed, analysed and finally presented in a cross-analysis to arrive at a result. The results show that the participants in our study have both similar and different perspectives on Toughest, and that there is a relationship between sporting background and participation, as the majority of our sample has an organised sporting background. Based on the interviews, all participants share a common view of Toughest as a sporting event that challenges them. Furthermore, some participants take part because it is fun and social, whereas others have a great passion for obstacle course racing and see Toughest as motivation for their daily training. Our conclusion is that Toughest is a unique event that attracts people in different ways: participants view it from different perspectives, their motivational factors differ, and they describe it as an activity that offers that little extra, something out of the ordinary.
85

Automated Supply-Chain Quality Inspection Using Image Analysis and Machine Learning

Zhu, Yuehan January 2019 (has links)
An image processing method for automatic quality assurance of Ericsson products is developed. The method consists of taking an image of the product, extracting the product labels from the image, applying OCR to the product numbers, and making a database lookup to match the mounted product against the customer specification. The engineering innovation of the method developed in this report is that the OCR is performed using machine learning techniques. It is shown that machine learning can produce results that are on par with or better than baseline OCR methods. The advantage of a machine learning based approach is that the associated neural network can be trained on the specific input images from the Ericsson factory. Imperfections in image quality, varying typefaces, etc. can be handled by properly training the network, a task that would have been very difficult with legacy OCR algorithms, where poor OCR results typically have to be mitigated by improving the input image quality rather than by changing the algorithm.
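For context, a minimal sketch of what a trainable OCR component for label characters might look like (our assumption, not the thesis implementation): a small convolutional network in PyTorch classifying cropped character images. Input size, class count and the random training data are placeholders standing in for factory label crops.

```python
import torch
import torch.nn as nn

# Classifies 32x32 grayscale character crops into 36 classes (0-9, A-Z).
# Sizes and class count are illustrative assumptions.
class CharNet(nn.Module):
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One dummy training step on random data, standing in for labeled label crops.
model = CharNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 1, 32, 32)
labels = torch.randint(0, 36, (8,))
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```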
86

Rozpoznání kódu z kontrolního obrázku / Code Detection from Control Image

Růžička, Miloslav January 2009 (has links)
This work deals with code detection from a control image. The document presents relevant image processing techniques covering noise reduction, thresholding, color models, object segmentation and OCR. The project examines the advantages and disadvantages of two selected object segmentation methods and introduces a newly developed segmentation system. The developed system for object segmentation and classification is implemented and evaluated, and its results are discussed in detail.
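As a rough sketch of the thresholding and segmentation steps mentioned above (an illustrative assumption, not the thesis system), the following OpenCV code binarizes a control image with Otsu's method and segments candidate code characters as connected components; the file name and size filter are placeholders.

```python
import cv2

# Load the control (captcha-style) image; the file name is a placeholder.
img = cv2.imread("control.png", cv2.IMREAD_GRAYSCALE)

# Light denoising followed by Otsu thresholding (dark text on light background assumed).
blurred = cv2.GaussianBlur(img, (3, 3), 0)
_, binary = cv2.threshold(blurred, 0, 255,
                          cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# Segment candidate characters as connected components and keep sensible sizes.
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
chars = []
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 20:      # size filter is an arbitrary illustrative value
        chars.append(binary[y:y + h, x:x + w])

print(f"segmented {len(chars)} candidate characters")
```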
87

Underwater Document Recognition

Shah, Jaimin Nitesh 18 May 2021 (has links)
No description available.
88

Text Segmentation of Historical Degraded Handwritten Documents

Nina, Oliver 05 August 2010 (has links) (PDF)
The use of digital images of handwritten historical documents has increased in recent years. This has been possible through the Internet, which allows users to access a vast collection of historical documents and makes historical and data research more attainable. However, the insurmountable number of images available in these digital libraries is cumbersome for a single user to read and process. Computers could help read these images through methods known as Optical Character Recognition (OCR), which have had significant success for printed materials but only limited success for handwritten ones. Most of these OCR methods work well only when the images have been preprocessed by getting rid of anything in the image that is not text. This preprocessing step is usually known as binarization. The binarization of images of historical documents that have been affected by degradation and that are of poor image quality is difficult and continues to be a focus of research in the field of image processing. We propose two novel approaches to attempt to solve this problem. One combines recursive Otsu thresholding and selective bilateral filtering to allow automatic binarization and segmentation of handwritten text images. The other adds background normalization and a post-processing step to the algorithm to make it more robust and to work even for images that present bleed-through artifacts. Our results show that these techniques help segment the text in historical documents better than traditional binarization techniques.
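A minimal sketch of the recursive Otsu idea combined with bilateral filtering (one possible reading of the approach described above, not the authors' code): the image is smoothed with an edge-preserving bilateral filter, and Otsu's threshold is re-applied to the darker side of each split to refine the cut-off. The recursion rule, depth, filter parameters and file names are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.filters import threshold_otsu

def recursive_otsu(values, depth=2):
    """Re-apply Otsu's threshold to the darker side of each split.

    One possible reading of 'recursive Otsu'; the actual recursion rule in the
    thesis may differ.
    """
    t = threshold_otsu(values)
    if depth <= 1:
        return t
    darker = values[values <= t]
    if darker.size < 2 or darker.min() == darker.max():
        return t
    return recursive_otsu(darker, depth - 1)

# Load a degraded handwritten page (file name is a placeholder).
page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

# Selective smoothing: bilateral filtering suppresses noise while keeping stroke edges.
smoothed = cv2.bilateralFilter(page, 9, 50, 50)

# Binarize with the recursively refined Otsu threshold.
t = recursive_otsu(smoothed.ravel())
binary = np.where(smoothed <= t, 0, 255).astype(np.uint8)
cv2.imwrite("binarized.png", binary)
```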
89

Data Acquisition from Cemetery Headstones

Christiansen, Cameron Smith 27 November 2012 (has links) (PDF)
Data extraction from engraved text is discussed rarely, and nothing in the open literature discusses data extraction from cemetery headstones. Headstone images present unique challenges such as engraved or embossed characters (causing inner-character shadows), low contrast with the background, and significant noise due to inconsistent stone texture and weathering. Current systems for extracting text from outdoor environments (billboards, signs, etc.) make assumptions (i.e. clean and/or consistently-textured background and text) that fail when applied to the domain of engraved text. Additionally, the ability to extract the data found on headstones is of great historical value. This thesis describes a novel and efficient feature-based text zoning and segmentation method for the extraction of noisy text from a highly textured engraved medium. Additionally, the usefulness of constraining a problem to a specific domain is demonstrated. The transcriptions of images zoned and segmented through the proposed system result in a precision of 55% compared to 1% precision without zoning, a 62% recall compared to 39%, an F-measure of 58% compared to 2%, and an error rate of 77% compared to 8303%.
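As a quick check of how the quoted figures relate (a worked example, not additional data from the thesis), the F-measure is the harmonic mean of precision and recall, which is consistent with the reported 55% precision and 62% recall:

```python
# F-measure (F1) as the harmonic mean of precision and recall,
# using the percentages quoted above for the zoned/segmented system.
precision = 0.55
recall = 0.62
f_measure = 2 * precision * recall / (precision + recall)
print(round(f_measure, 2))  # -> 0.58, matching the reported 58%
```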
90

OCR: A Statistical Model of Multi-Engine OCR Systems

McDonald, Mercedes Terre 01 January 2004 (has links)
This thesis is a benchmark performed on three commercial Optical Character Recognition (OCR) engines. The purpose of the benchmark is to characterize the performance of the OCR engines with emphasis on the correlation of errors between the engines. The benchmarks evaluate the effect of a multi-OCR system employing a voting scheme to increase overall recognition accuracy. This is desirable since current OCR systems are still unable to recognize characters with 100% accuracy. The existing error rates of OCR engines pose a major problem for applications where a single error can affect significant outcomes, such as legal applications. The results obtained from this benchmark are the primary factor in deciding whether to implement a voting scheme. The experiments showed a very high accuracy rate for each of the commercial OCR engines: the average accuracy found for each engine was near 99.5%, measured on a document of fewer than 6,000 words. While these error rates are very low, the goal in legal applications is 100% accuracy. Based on the work in this thesis, it has been determined that a simple voting scheme will help improve the accuracy rate.
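A minimal sketch of the kind of character-level majority voting such a multi-engine system could use (an illustration under our assumptions, not the statistical model developed in the thesis); it assumes the engine outputs are already aligned character by character, which real systems must ensure with an alignment step.

```python
from collections import Counter

def vote(outputs):
    """Majority vote per character position across aligned OCR outputs.

    Assumes the outputs have equal length (i.e. are already aligned);
    real systems also need explicit tie-breaking rules.
    """
    result = []
    for chars in zip(*outputs):
        result.append(Counter(chars).most_common(1)[0][0])
    return "".join(result)

# Three hypothetical engine outputs for the same line of text.
engine_outputs = ["recognition", "recognltion", "recoqnition"]
print(vote(engine_outputs))  # -> "recognition"
```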
