91 |
Handwriting Chinese character recognition based on quantum particle swarm optimization support vector machine. Pang, Bo. January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
|
92 |
A Book Reader Design for Persons with Visual Impairment and Blindness. Galarza, Luis E. 16 November 2017 (has links)
The objective of this dissertation is to provide a new design approach to a fully automated book reader for individuals with visual impairment and blindness that is portable and cost effective. This approach relies on the geometry of the design setup and provides the mathematical foundation for integrating, in a unique way, a 3-D space surface map from a low-resolution time-of-flight (ToF) device with a high-resolution image as a means to enhance the reading accuracy of images warped by the page curvature of bound books and other magazines. The merits of this low-cost but effective automated book reader design include: (1) a seamless registration process of the two imaging modalities, so that the low-resolution (160 x 120 pixels) height map acquired by an Argos3D-P100 camera accurately covers the entire book spread as captured by the high-resolution image (3072 x 2304 pixels) of a Canon G6 camera; (2) a mathematical framework for overcoming the difficulties associated with the curvature of open bound books, a process referred to as the dewarping of the book spread images; and (3) an image correction performance comparison between the uniform and full height maps to determine which map provides the highest Optical Character Recognition (OCR) reading accuracy. The design concept could also be applied to the challenging process of book digitization. This method depends on the geometry of the book reader setup for acquiring a 3-D map that yields high reading accuracy once appropriately fused with the high-resolution image. The experiments were performed on a dataset consisting of 200 pages with their corresponding computed and co-registered height maps, which are made available to the research community (cate-book3dmaps.fiu.edu). Improvements to the character reading accuracy due to the correction steps were quantified by introducing the corrected images to an OCR engine and tabulating the number of misrecognized characters. Furthermore, the resilience of the book reader was tested by introducing a rotational misalignment to the book spreads and comparing the OCR accuracy to that obtained with the standard alignment. The standard alignment yielded an average reading accuracy of 95.55% with the uniform height map (i.e., the height values of the central row of the 3-D map are replicated to approximate all other rows), and 96.11% with the full height maps (i.e., each row has its own height values as obtained from the 3-D camera). When the rotational misalignments were taken into account, the results produced average accuracies of 90.63% and 94.75% for the same respective height maps, demonstrating the added resilience of the full height map method to potential misalignments.
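A minimal sketch, in Python, of the two height-map variants compared in this abstract, assuming the full map is already co-registered with the high-resolution image and held as a NumPy array; the uniform map simply replicates the central row, and reading accuracy is computed from the tabulated count of misrecognized characters. The array shape and character counts below are hypothetical examples, not values from the dissertation.

```python
import numpy as np

def uniform_height_map(full_map: np.ndarray) -> np.ndarray:
    """Replicate the central row of a co-registered height map so that
    every row shares the same height profile (the 'uniform' variant)."""
    central_row = full_map[full_map.shape[0] // 2, :]
    return np.tile(central_row, (full_map.shape[0], 1))

def ocr_accuracy(n_characters: int, n_misrecognized: int) -> float:
    """Reading accuracy as the percentage of correctly recognized characters."""
    return 100.0 * (n_characters - n_misrecognized) / n_characters

# Hypothetical example: a 120 x 160 map (the Argos3D-P100 resolution) and an
# OCR run that misrecognizes 89 out of 2000 characters.
full_map = np.random.rand(120, 160)
flat_map = uniform_height_map(full_map)
print(flat_map.shape, round(ocr_accuracy(2000, 89), 2))
```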
|
93 |
Hybrid segmentation on slant & skewed deformation text in natural scene images / Hybrid segmentation on slant and skewed deformation text in natural scene images. Fei, Xiao Lei. January 2010 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
|
94 |
Gabor filter parameter optimization for multi-textured images : a case study on water body extraction from satellite imagery. Pillay, Maldean. January 2012 (has links)
The analysis and identification of texture is a key area in image processing and computer vision. One of the most prominent texture analysis algorithms is the Gabor filter. These filters are applied by convolving an image with a family of self-similar filters, or wavelets, generated for a suitable number of scales and orientations, which aid in identifying textures of differing coarseness and direction respectively.

While extensively used in a variety of applications, including biometrics such as iris and facial recognition, their effectiveness depends largely on the manual selection of several parameter values, i.e. the centre frequency, the number of scales and orientations, and the standard deviations. Previous studies have examined how to determine optimal values; however, the results are sometimes inconsistent and even contradictory. Furthermore, the selection of the mask size and tile size used in the convolution process has received little attention, presumably because they depend on the image set.

This research attempts to verify specific claims made in previous studies about the influence of the number of scales and orientations, and also investigates the variation of the filter mask size and tile size for water body extraction from satellite imagery. Optical satellite imagery may contain texture samples that are conceptually the same (belong to the same class) but are structurally different, or that differ due to changes in illumination, i.e. a texture may appear completely different when the intensity or position of a light source changes.

A systematic testing of the effects of varying the parameter values on optical satellite imagery is conducted. Experiments are designed to verify claims made about the influence of varying the scales and orientations within predetermined ranges, and also to show the considerable changes in classification accuracy when varying the filter mask and tile size. Heuristic techniques such as Genetic Algorithms (GAs) can be used to find optimum solutions in application domains where an enumeration approach is not feasible. Hence, the effectiveness of a GA for automating the determination of optimum Gabor filter parameter values for a given image dataset is also investigated.

The results of the research can be used to facilitate the selection of Gabor filter parameters for applications that involve multi-textured image segmentation or classification, and specifically to guide the selection of appropriate filter mask and tile sizes for automated analysis of satellite imagery. / Thesis (M.Sc.)-University of KwaZulu-Natal, Durban, 2012.
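To illustrate the parameters discussed in this abstract, the sketch below builds a small Gabor filter bank over a chosen number of scales and orientations with OpenCV and summarises filter responses per tile. The mask size, wavelength, standard deviation and tile size are hypothetical placeholders, not the values recommended by the study.

```python
import cv2
import numpy as np

def gabor_filter_bank(n_scales=4, n_orientations=6, mask_size=31,
                      base_wavelength=4.0, sigma=2.0, gamma=0.5):
    """Build a family of self-similar Gabor kernels over several
    scales (wavelengths) and evenly spaced orientations."""
    bank = []
    for s in range(n_scales):
        wavelength = base_wavelength * (2 ** s)       # coarser texture at larger scales
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations        # orientation in radians
            kernel = cv2.getGaborKernel((mask_size, mask_size), sigma * (2 ** s),
                                        theta, wavelength, gamma, psi=0,
                                        ktype=cv2.CV_32F)
            bank.append(kernel)
    return bank

def gabor_features(image, bank, tile_size=32):
    """Convolve a grayscale image with each kernel and summarise the
    response energy per non-overlapping tile, one feature vector per tile."""
    h, w = image.shape
    responses = [cv2.filter2D(image, cv2.CV_32F, k) for k in bank]
    features = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tile_energy = [float(np.mean(np.abs(r[y:y + tile_size, x:x + tile_size])))
                           for r in responses]
            features.append(tile_energy)
    return np.array(features)
```

A GA-based search such as the one investigated in the thesis would then treat these parameters (scales, orientations, mask size, tile size) as the chromosome and the resulting classification accuracy as the fitness value.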
|
95 |
Voice input for the disabled / Holmes, William Paul. January 1987 (has links) (PDF)
Thesis (M. Eng. Sc.)--University of Adelaide, 1987. / Typescript. Includes a copy of a paper presented at TADSEM '85 --Australian Seminar on Devices for Expressive Communication and Environmental Control, co-authored by the author. Includes bibliographical references (leaves [115-121]).
|
96 |
A new class of convolutional neural networks based on shunting inhibition with applications to visual pattern recognition. Tivive, Fok Hing Chi. January 2006 (has links)
Thesis (Ph.D.)--University of Wollongong, 2006. / Typescript. Includes bibliographical references: leaf 208-226.
|
97 |
API för att tolka och ta fram information från kvitton [API for interpreting and extracting information from receipts]. Sanfer, Jonathan. January 2018 (has links)
This report describes the creation of an API that can extract information from pictures of receipts. The information the API was to deliver comprised the organisation registration number, date, time, total amount and VAT. The thesis also includes an in-depth look at OCR (optical character recognition), the technology that converts pictures and documents into text. The thesis work was carried out for Flex Applications AB.
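A minimal sketch of the kind of extraction this abstract describes, assuming Tesseract (with Swedish language data installed) is used as the OCR engine via pytesseract; the regular expressions for the organisation number, date, time and total are illustrative guesses, not the rules used in the actual API.

```python
import re
from PIL import Image
import pytesseract

def extract_receipt_fields(image_path: str) -> dict:
    """OCR a receipt image and pull out a few fields with simple patterns."""
    # Assumes the Swedish traineddata for Tesseract is available.
    text = pytesseract.image_to_string(Image.open(image_path), lang="swe")
    patterns = {
        # Swedish organisation number, e.g. 556677-8899 (illustrative pattern)
        "org_number": r"\b\d{6}-\d{4}\b",
        "date": r"\b\d{4}-\d{2}-\d{2}\b",
        "time": r"\b\d{2}:\d{2}(?::\d{2})?\b",
        # Total amount, e.g. "Totalt 123,45" (illustrative pattern)
        "total": r"(?i)total\w*\s+\d+[.,]\d{2}",
    }
    matches = {key: re.search(p, text) for key, p in patterns.items()}
    return {key: (m.group(0) if m else None) for key, m in matches.items()}

print(extract_receipt_fields("receipt.jpg"))  # hypothetical input image
```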
|
98 |
A Possibilistic Approach To Handwritten Script Identification Via Morphological Methods For Pattern Representation. Ghosh, Debashis. 04 1900 (links) (PDF)
No description available.
|
99 |
Detekce objektu ve videosekvencích / Object Detection in Video Sequences. Šebela, Miroslav. January 2010 (has links)
The thesis consists of three parts: a theoretical description of digital image processing, optical character recognition, and the design of a system for car licence plate recognition (LPR) in an image or video sequence. The theoretical part covers image representation, smoothing and methods used for blob segmentation, and proposes two methods for optical character recognition (OCR). The aim of the practical part is to find a solution and design a procedure for an LPR system that includes OCR. The design comprises image pre-processing, blob segmentation, object detection based on blob properties, and OCR. The proposed solution uses grayscale transformation, histogram processing, thresholding, connected-component labelling, and region recognition based on region patterns and properties. An optical licence plate recognition method is also implemented, in which the recognized values are compared with a database used to manage vehicle entry to a site.
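A minimal sketch of the pre-processing and segmentation steps listed in this abstract (grayscale transformation, smoothing, thresholding, connected-component labelling), using OpenCV; the blob-size thresholds are hypothetical and the character recognition step is left out.

```python
import cv2

def candidate_character_regions(image_path: str, min_area=100, max_area=5000):
    """Grayscale transformation, smoothing, thresholding and connected-component
    labelling, returning bounding boxes of plausible character-sized blobs."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)              # smoothing
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n_labels):                          # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area:                  # keep character-sized blobs
            boxes.append((x, y, w, h))
    return boxes

print(candidate_character_regions("frame.png"))           # hypothetical video frame
```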
|
100 |
OCR of hand-written transcriptions of hieroglyphic text. Nederhof, Mark-Jan. January 2016 (has links)
Encoding hieroglyphic texts is time-consuming. If a text already exists as hand-written transcription, there is an alternative, namely OCR. Off-the-shelf OCR systems seem difficult to adapt to the peculiarities of Ancient Egyptian. Presented is a proof-of-concept tool that was designed to digitize texts of Urkunden IV in the hand-writing of Kurt Sethe. It automatically recognizes signs and produces a normalized encoding, suitable for storage in a database, or for printing on a screen or on paper, requiring little manual correction.
The encoding of hieroglyphic text is RES (Revised Encoding Scheme) rather than (common dialects of) MdC (Manuel de Codage). Earlier papers argued against MdC and in favour of RES for corpus development. Arguments in favour of RES include longevity of the encoding, as its semantics are font-independent. The present study provides evidence that RES is also much preferable to MdC in the context of OCR. With a well-understood parsing technique, relative positioning of scanned signs can be straightforwardly mapped to suitable primitives of the encoding.
|