1

Framtagning av prototyp för att läsa och dokumentera kundspecifikationer / Development of a prototype for reading and documenting customer specifications

Larsson, Anders January 2006 (has links)
To increase the quality of its products, ABB is working towards a cleared-order concept, meaning that all customer-specified options are to be known before order calculation and construction begin. Today this is achieved with paper checklists. One order may comprise several reactors, and each reactor can have several different alternatives; for each alternative a new checklist must be filled out. At present, all reading of the customer specification and filling in of checklists is done by hand by different people, and sometimes the same data is read more than once. All data is also inserted manually into the calculation tools. To reduce the risk that data is left out or distorted, ABB wants a tool that supports the reading of the specification and the documentation of that work. Data that has already been read can be copied over to another alternative so that it does not have to be read again, and the read data is stored in a database from which it can be inserted automatically into the different design tools.
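A rough sketch of the hierarchy the abstract describes, with hypothetical names and option values standing in for ABB's actual data model: an order holds several reactors, a reactor several alternatives, and checklist data read once is copied to another alternative instead of being read again.

```python
# Hypothetical model, not the thesis prototype or ABB's schema.
from dataclasses import dataclass, field

@dataclass
class Alternative:
    name: str
    checklist: dict[str, str] = field(default_factory=dict)  # option -> read value

@dataclass
class Reactor:
    alternatives: list[Alternative] = field(default_factory=list)

@dataclass
class Order:
    reactors: list[Reactor] = field(default_factory=list)

def copy_read_data(src: Alternative, dst: Alternative) -> None:
    """Reuse specification data that has already been read."""
    dst.checklist.update(src.checklist)

base = Alternative("A", {"cooling class": "ONAN", "rated voltage": "400 kV"})
variant = Alternative("B")
copy_read_data(base, variant)  # read once, reused here
print(variant.checklist)       # {'cooling class': 'ONAN', 'rated voltage': '400 kV'}
```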
2

A software toolkit for handprinted form readers

Cracknell, Christopher Robert William January 1999 (has links)
No description available.
3

Automatizované hodnocení typografické kvality dokumentů s využitím grafových algoritmů / Automated assessment of the typographic quality of documents using graph algorithms

Machulka, Tomáš January 2013 (has links)
No description available.
4

Evaluation of Analysis Methods Used for the Assessment of I-wall Stability

Vega-Cortes, Liselle 04 February 2008 (has links)
On Monday, 29 August 2005, Hurricane Katrina struck the U.S. Gulf Coast. The storm damaged 169 of the 284 miles that make up the area's Hurricane Protection System (HPS). The system suffered 46 breaches caused by water overtopping and another four caused by instability due to foundation soil failure. The Interagency Performance Evaluation Task Force (IPET) conducted a study to analyze the I-wall breaches at the various New Orleans flood control structures and looked for ways to improve the design of these floodwalls. The purpose of the investigation described in this document is to evaluate different methods for improving the analysis model created by IPET, select the best possible analysis techniques, and apply them to a current cross-section that did not fail during Hurricane Katrina. The use of Finite Element (FE) analysis to obtain the vertical total stress distribution in the vicinity of the I-wall and to calculate pore pressures proved to be an effective enhancement. The influence of overconsolidation on the shear strength distribution of the foundation soils was examined as well. / Master of Science
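The FE analysis mentioned in the abstract yields the vertical total stress and the pore pressure near the wall; these combine through the classical effective-stress relation. A common way to express the influence of overconsolidation on undrained shear strength is the SHANSEP-type relation shown second, a standard formulation from the literature and not necessarily the exact one used in this study (here OCR denotes the overconsolidation ratio, not optical character recognition).

```latex
% sigma_v: vertical total stress,  u: pore pressure,
% s_u: undrained shear strength,  S, m: fitted soil parameters
\sigma_v' = \sigma_v - u,
\qquad
\frac{s_u}{\sigma_v'} = S \cdot \mathrm{OCR}^{\,m}
```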
5

Post-processing of optical character recognition for Swedish addresses / Efterbehandling av optisk teckenigenkänning för svenska adresser

Andersson, Moa January 2022 (has links)
Optical character recognition (OCR) has many applications, such as digitizing historical documents, automating processes, and helping visually impaired people read. However, extracting text from images into a digital format is not an easy problem to solve, and the output of OCR frameworks often contains errors. The complexity comes from the many variations in (digital) fonts, handwriting, lighting, etc. To tackle this problem, this thesis investigates two different methods for correcting the errors in OCR output. The dataset used consists of Swedish addresses, and the methods are applied to postal automation to investigate their use for further automating postal work by automatically reading addresses on parcels with OCR. The main method, the lexical implementation, uses a dataset of Swedish addresses such that any valid address should be in this dataset (hence there is a known and limited vocabulary); misspelled addresses are corrected to the address in the lexicon with the smallest Levenshtein distance. The second approach uses the same dataset, but with artificial errors, or artificial noise, added. The addresses with this artificial noise are then used together with their correct spellings to train a machine learning model based on neural machine translation (NMT) to automatically correct errors in OCR-read addresses. The results from this study could help define the direction of future work on OCR and postal addresses. The lexical implementation outperformed the NMT model, but more experiments with real data would be required to draw definitive conclusions about how the methods would work in real-life applications.
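A minimal sketch of the lexical method described above, with a tiny illustrative lexicon standing in for the real address dataset: the OCR-read string is corrected to the lexicon entry at the smallest Levenshtein distance. This is a generic implementation, not the thesis code.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def correct(address: str, lexicon: list[str]) -> str:
    """Return the valid address closest to the OCR output."""
    return min(lexicon, key=lambda entry: levenshtein(address, entry))

# Hypothetical lexicon entries for illustration.
lexicon = ["Storgatan 1, 111 22 Stockholm", "Strandgatan 1, 111 22 Stockholm"]
print(correct("Storgatan 1, 1l1 22 Stcckholm", lexicon))
# -> "Storgatan 1, 111 22 Stockholm"
```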
6

Matbudgetapplikation / Food budget application

Nogén, David, Jonsson, Jennifer January 2013 (has links)
Multiple new services such as "Mina utgifter" and "Smartbudget" show an increasing interest among consumers in planning their finances. Groceries represent a large part of the average household's budget and are therefore an expense where savings can make a real difference in a household's economy. This thesis examines the possibility of comparing food prices across stores with the help of an Android application, by photographing the text on receipts. The text is then processed and sorted to extract the necessary data, which can later be saved into a database. Ready-made algorithms and OCR engines have been evaluated and implemented directly in the application through so-called C libraries, which also makes it possible to further develop the application for iOS or Windows Phone without major effort. The project and the Android application demonstrate the possibility of using ready-made C libraries and the phone's camera to extract and save the information that is relevant to the consumer.
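As a rough sketch of the post-OCR step described above (in Python rather than the application's Android/C setting), the snippet below pulls item/price pairs out of raw receipt text with a regular expression. The receipt layout and the pattern are illustrative assumptions, not the thesis implementation.

```python
import re

# One article per line, price at the end with a comma or dot decimal.
LINE = re.compile(r"^(?P<item>.+?)\s+(?P<price>\d+[.,]\d{2})$")

def parse_receipt(ocr_text: str) -> list[tuple[str, float]]:
    """Extract (item, price) pairs from OCR-read receipt text."""
    items = []
    for line in ocr_text.splitlines():
        m = LINE.match(line.strip())
        if m:
            price = float(m.group("price").replace(",", "."))
            items.append((m.group("item"), price))
    return items

sample = "MJOLK 1.5L  14,90\nBROD       22,50\nTOTAL      37,40"
print(parse_receipt(sample))
# [('MJOLK 1.5L', 14.9), ('BROD', 22.5), ('TOTAL', 37.4)]
```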
7

Test av OCR-verktyg för Linux / OCR software tests for Linux

Nilsson, Elin January 2010 (has links)
This report is about finding an OCR tool for digitizing paper documents. Among the requirements were that the tool be compatible with Linux, accept commands via the command line, and handle Scandinavian characters. Twelve OCR tools were reviewed, and three were chosen: Ocrad, Tesseract and OCR Shop XTR. To test them, two documents were scanned and digitized with each tool. The results show that Tesseract is the most precise tool and Ocrad the fastest; OCR Shop XTR gives the worst results in both timing and number of correct words.
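A minimal sketch of this kind of test harness, assuming the tools are on the PATH and a ground-truth transcription exists: each command-line OCR tool is timed on a scanned page and its word accuracy computed against the truth. The invocations and file names are illustrative (Ocrad, for instance, expects PNM input), and the position-wise word comparison is cruder than a proper alignment.

```python
import subprocess
import time

TOOLS = {
    "tesseract": ["tesseract", "scan.png", "stdout", "-l", "swe"],
    "ocrad": ["ocrad", "scan.ppm"],
}

def word_accuracy(output: str, truth: str) -> float:
    """Naive position-wise word comparison; a real test would align first."""
    pairs = zip(output.split(), truth.split())
    correct = sum(1 for out_word, true_word in pairs if out_word == true_word)
    return correct / max(len(truth.split()), 1)

truth = open("scan_truth.txt", encoding="utf-8").read()
for name, cmd in TOOLS.items():
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.2f} s, {word_accuracy(result.stdout, truth):.1%} correct")
```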
8

Off-line cursive handwriting recognition using recurrent neural networks

Senior, Andrew William January 1994 (has links)
No description available.
9

A robust off-line handwritten character recognition system using dynamic features

Rodrigues, Antonio Jose Nunes Navarro January 1996 (has links)
No description available.
10

Estratégias para melhoria do desempenho de ferramentas comerciais de reconhecimento óptico de caracteres / Strategies for improving the performance of commercial optical character recognition tools

Ferreira Alves, Neide 31 January 2008 (has links)
To assess the performance of commercial Optical Character Recognition (OCR) tools, metrics are needed that measure how close a transcribed text is to the original, since even the smallest changes to an image influence the OCR transcriptions. This work presents a new approach to evaluating and improving OCR transcriptions: filtering techniques (brightness, contrast, resolution, rotation, etc.) are applied to the original image so that these minimal changes generate numerous images, which are submitted to the OCR and yield distinct texts. An algorithm was developed to compare the generated texts, analyzing everything from the number of lines down to the equality of individual characters. By taking the most frequent character among the transcriptions, this algorithm generates a new text file. With this methodology, the generated file was very close to the original, with a higher accuracy rate than the files transcribed without the filtering process.
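A minimal sketch of the voting idea behind the method described above: several OCR transcriptions of filtered variants of the same image are combined by taking the most frequent character at each position. Real transcriptions need alignment first (the thesis compares line counts before characters); equal-length strings are assumed here for brevity.

```python
from collections import Counter

def consensus(transcriptions: list[str]) -> str:
    """Character-wise majority vote over equally long transcriptions."""
    return "".join(
        Counter(chars).most_common(1)[0][0]
        for chars in zip(*transcriptions)
    )

# Three OCR readings of the same word from differently filtered images.
variants = ["recognitlon", "recognition", "rec0gnition"]
print(consensus(variants))  # -> "recognition"
```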
