  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Real-time Optical Character Recognition in Steel Bars using YOLOv5

Gattupalli, Monica January 2023 (has links)
Background. Identifying the quality of products in the manufacturing industry is a challenging task. Manufacturers use needles to print unique numbers on products to differentiate between good- and bad-quality items. However, identifying these needle-printed characters can be difficult, so new technologies such as deep learning and optical character recognition (OCR) are used to recognize them.

Objective. The primary objective of this thesis is to identify the needle-printed characters on steel bars. This objective is divided into two sub-objectives: first, to identify the region of interest on the steel bars and extract it from the images; second, to identify the characters on the steel bars from the extracted images. The YOLOv5 and YOLOv5-OBB object detection algorithms are used to achieve these objectives.

Method. A literature review was first performed to select the algorithms, and the dataset, provided by OVAKO, was then collected. The dataset included 1000 old images and 3000 new images of steel bars. To answer RQ2, existing OCR techniques were first applied to the old images, which yielded low accuracy. The YOLOv5 algorithm was therefore used on the old images to detect the region of interest. Different rotation techniques were applied to the cropped images (cropped after the bounding box was detected), but no promising result was observed, so YOLOv5 was also applied at the character level; those results were unsatisfactory. YOLOv5-OBB was then used on the new images, which resulted in good accuracy.

Results. Accuracy and mAP are used to assess the performance of the OCRs and the selected object detection algorithms. The existing OCR was also used in the extraction; however, it had an accuracy of 0%, meaning it failed to identify any characters. With a mAP of 0.95, YOLOv5 is good at extracting cropped images but fails to identify the characters. When YOLOv5-OBB is used to handle orientation, it achieves a mAP of 0.93. Due to time constraints, the last part of the thesis was not implemented.

Conclusion. The present research employed the YOLOv5 and YOLOv5-OBB object detection algorithms to identify needle-printed characters on steel bars. By first selecting the region of interest and then extracting images, the study objectives were met. Finally, character-level identification was performed on the old images using YOLOv5 and on the new images using YOLOv5-OBB, with promising results.
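The abstract above does not include code, but the final character-level step it describes (turning per-character YOLOv5 detections into a printed ID string) can be sketched in a few lines. This is a minimal illustration only, assuming each post-NMS detection is a (x1, y1, x2, y2, label, confidence) tuple; the function name and confidence threshold are hypothetical, not taken from the thesis.

```python
from typing import List, Tuple

# A detection as produced by a YOLO-style model after non-max suppression:
# (x1, y1, x2, y2, label, confidence). Format is an assumption for this sketch.
Detection = Tuple[float, float, float, float, str, float]

def read_characters(detections: List[Detection], min_conf: float = 0.25) -> str:
    """Assemble a printed ID string from per-character detections.

    Keeps detections above a confidence threshold and orders them by the
    horizontal centre of their bounding box, i.e. left to right along the bar.
    """
    kept = [d for d in detections if d[5] >= min_conf]
    kept.sort(key=lambda d: (d[0] + d[2]) / 2.0)  # sort by x-centre
    return "".join(d[4] for d in kept)

# Example: three character boxes detected out of left-to-right order.
dets = [
    (120.0, 10.0, 150.0, 40.0, "7", 0.91),
    (40.0, 12.0, 70.0, 42.0, "A", 0.88),
    (80.0, 11.0, 110.0, 41.0, "3", 0.95),
]
print(read_characters(dets))  # A37
```

For rotated text (the YOLOv5-OBB case), the same idea applies after projecting box centres onto the text baseline instead of sorting on raw x-coordinates.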
2

Automatic compilation and summarization of documented Russian equipment losses in Ukraine : A method development / Automatisk sammanställning och sammanfattning av dokumenterade ryska materielförluster i Ukraina : Metodutveckling

Zaff, Carl January 2023 (has links)
Since the Russian invasion of Ukraine on the 24th of February 2022, most of the United Nations' member states have, in one way or another, been involved in the most significant war in many decades. The war is characterized by Russia's atrocious war crimes, illegal annexations, terror, propaganda, and complete disrespect for international law. On the other hand, the war has also been characterized by Ukrainian resilience, a united Europe, and a new dimension of intelligence gathering through social media.

Due to the internet, social media, the accessibility of mobile devices, and Ukraine's military and civilian effort in documenting Russian equipment – its whereabouts, status, and quantity – Open-Source Intelligence possibilities have reached new levels for both professionals and amateurs. Despite these improved possibilities, gathering such a vast amount of data is still a Herculean effort.

Hence, this study contributes a starting point for anyone wanting to compile equipment losses by providing a process specialized in automatic data extraction and summarization from an existing database. The database in question is the image collection of the military analysis group Oryxspioenkop. To further complement the information provided by Oryxspioenkop, the method automatically extracts and annotates dates from the images to provide a chronological order of the equipment losses as well as a graphical overview.

The process shows promising results and manages to compile a large set of data, both the information provided by Oryx and the dates extracted from its imagery. Further, the automated process proves to be many times faster than its manual counterpart, showing a linear relationship between the number of images analysed and man-hours saved. However, due to the limited development time, the process still has room for improvement and should be considered semi-automatic rather than automatic. Nevertheless, thanks to the open-source design, the process can be continuously updated and modified to work with other databases, images, or the extraction of other strings of text from imagery.

With the rise of competent artificial image generation models, the study also raises the question of whether this kind of imagery will remain a reliable source when studying equipment losses, or whether artificial intelligence will be used as a tool of propaganda and psychological operations in wars to come. / Sedan Rysslands oprovocerade invasion av Ukraina den 24e februari 2022 har stora delar av de Förenta nationerna engagerat sig i århundradets mest signifikanta krig. Kriget har karaktäriserats av ryska krigsbrott, olagliga annekteringar, terror, propaganda samt en total avsaknad av respekt för folkrätt. I kontrast har kriget även karaktäriserats av Ukrainas ovillkorliga motståndskraft, ett enat Europa och en ny dimension av underrättelseinhämtning från sociala medier.

Genom internet, sociala medier, tillgängligheten av mobiltelefoner och Ukrainas militära och civila ansträngning att dokumentera rysk materiel – vart den befinner sig, vilken status den har samt vilken kvantitet den finns i – har öppen underrättelseinhämtning blomstrat på både professionell och amatörnivå. Dock, på grund av den kvantitet som denna data genereras i, kräver en helhetssammanställning en oerhörd insats.

Därav avser detta arbete ge en grund för sammanställning av materielförluster genom att tillhandahålla en automatiserad process för att extrahera data från en befintlig databas. Detta har exemplifierats genom att nyttja bildkollektioner från Oryxspioenkop, en grupp bestående av militäranalytiker som fokuserar på sammanställning av grafiskt material. Utöver detta kompletterar processen befintliga data genom att inkludera datumet då materielen dokumenterats. Därigenom ges även en kronologisk ordning för förlusterna.

Processen visar lovande resultat och lyckas att effektivt och träffsäkert sammanställa stora mängder data. Vidare lyckas processen att överträffa sin manuella motsvarighet och visar på ett linjärt samband mellan antalet analyserade bilder och besparade mantimmar. Dock, på grund av den korta utvecklingstiden, har processen fortfarande en del utvecklingsmöjlighet och förblir semiautomatisk snarare än automatisk. Å andra sidan, eftersom processen bygger på öppen källkod, finns fortsatt möjlighet att uppdatera och modifiera processen för att passa annat källmaterial.

Slutligen, i och med den kontinuerliga utvecklingen av artificiell intelligens och artificiellt genererade bilder, lyfter studien frågan om denna typ av data kommer vara en trovärdig källa i framtida analyser av materielförluster, eller om det kommer att förvandlas till verktyg för propaganda och påverkansoperationer i ett framtida krig.
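The thesis's date-extraction and chronological-ordering step is not published as code; the sketch below shows one way such a compilation pass could work, assuming dates appear as dd.mm.yyyy-style strings in captions or filenames. The caption strings, regex, and function names are illustrative only, not the actual Oryxspioenkop data format.

```python
import re
from datetime import date

# Matches dd.mm.yyyy-style dates with ., /, or - separators (an assumption).
DATE_RE = re.compile(r"(\d{1,2})[./-](\d{1,2})[./-](\d{4})")

def extract_date(text: str):
    """Return the first dd.mm.yyyy-style date found in a string, or None."""
    m = DATE_RE.search(text)
    if not m:
        return None
    day, month, year = map(int, m.groups())
    return date(year, month, day)

def chronological(entries):
    """Keep entries with a parseable date, sorted oldest first."""
    dated = [(extract_date(e), e) for e in entries]
    dated = [(d, e) for d, e in dated if d is not None]
    dated.sort(key=lambda pair: pair[0])
    return [e for _, e in dated]

# Hypothetical loss captions; real Oryx entries differ.
captions = [
    "T-72B3, destroyed, 03.04.2022",
    "BMP-2, captured, 27.02.2022",
    "no date recorded",
]
print(chronological(captions))
```

Entries without a recoverable date fall out of the chronological listing, which mirrors why the thesis classifies the overall process as semi-automatic: some records still need manual review.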
3

Mobile Real-Time License Plate Recognition

Liaqat, Ahmad Gull January 2011 (has links)
License plate recognition (LPR) systems play an important role in numerous applications, such as parking accounting systems, traffic law enforcement, road monitoring, expressway toll systems, electronic-police systems, and security systems. In recent years there has been a great deal of research in license plate recognition, and many recognition systems have been proposed and used, but these systems have been developed for desktop computers. In this project, we developed a mobile LPR system for the Android operating system (OS). LPR involves three main components: license plate detection, character segmentation, and Optical Character Recognition (OCR). For license plate detection and character segmentation we used the JavaCV and OpenCV libraries, and for OCR we used tesseract-ocr. We obtained very good results using these libraries. We also stored license number records in a database, for which SQLite was used.
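As a rough illustration of the character segmentation stage named above (the thesis used JavaCV/OpenCV contour analysis, which is not reproduced here), the following stdlib-only sketch splits a binarized plate image by its vertical projection profile: columns with no ink separate the characters. The function name and input format are assumptions.

```python
def segment_characters(binary_img):
    """Split a binarized plate image (list of rows of 0/1, 1 = ink) into
    per-character column ranges using a vertical projection profile.

    Returns a list of (start_col, end_col) slices, one per character.
    A simplified stand-in for contour-based segmentation.
    """
    if not binary_img:
        return []
    width = len(binary_img[0])
    profile = [sum(row[c] for row in binary_img) for c in range(width)]
    segments, start = [], None
    for c, ink in enumerate(profile):
        if ink and start is None:
            start = c                    # a character column run begins
        elif not ink and start is not None:
            segments.append((start, c))  # a blank column ends the run
            start = None
    if start is not None:
        segments.append((start, width))  # run touches the right edge
    return segments

# Tiny synthetic "plate": two ink blobs separated by one blank column.
img = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
]
print(segment_characters(img))  # [(0, 2), (3, 5)]
```

Each returned column range would then be cropped and passed to the OCR component; projection profiles are fast enough for real-time use on a phone, though contour methods handle tilted plates better.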
4

Rozpoznávání textu pomocí konvolučních sítí / Optical Character Recognition Using Convolutional Networks

Csóka, Pavel January 2016 (has links)
This thesis aims at the creation of new datasets for machine-learning text recognition tasks and at experiments with convolutional neural networks on these datasets. It describes the architecture of convolutional networks, the difficulties of recognizing text in photographs, and contemporary work using these networks. Next, it describes the creation of annotations, using Tesseract OCR, for a dataset named Mobile Page Photos, comprising photos of document pages taken with mobile phones. From this dataset, two additional datasets are created by cropping characters out of its photos, formatted like the Street View House Numbers dataset: Mobile Nice Page Photos Characters contains readable characters, while Mobile Page Photos Characters adds hardly readable and unreadable ones. Three convolutional network models are created and used for text recognition experiments on these datasets, which are also used to estimate the annotation error.
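The character-cropping step described above can be sketched without any imaging library: cut a character bounding box out of a grayscale image and pad it to a fixed-size patch, as in SVHN-style character datasets. Everything here (function name, box format, padding instead of the resizing a real pipeline would use) is illustrative, not the thesis's actual tooling.

```python
def crop_character(img, box, size=32, pad_value=0):
    """Cut a character bounding box out of a grayscale image (list of rows)
    and pad it into a fixed size-by-size patch, SVHN-style.

    box is (x1, y1, x2, y2) in pixel coordinates, clamped to the image.
    """
    x1, y1, x2, y2 = box
    h, w = len(img), len(img[0])
    x1, y1 = max(0, x1), max(0, y1)
    x2, y2 = min(w, x2), min(h, y2)
    crop = [row[x1:x2] for row in img[y1:y2]]
    # Place the crop in the top-left of a blank patch; a production
    # pipeline would typically resize rather than pad.
    patch = [[pad_value] * size for _ in range(size)]
    for r, row in enumerate(crop[:size]):
        for c, v in enumerate(row[:size]):
            patch[r][c] = v
    return patch

# Synthetic 8x10 "page" whose pixel values encode their own coordinates.
img = [[r * 10 + c for c in range(10)] for r in range(8)]
patch = crop_character(img, (2, 1, 6, 5), size=4)
print(patch[0])  # first row of the 4x4 patch: [12, 13, 14, 15]
```

Fixed-size patches like these are what make the cropped characters drop-in compatible with models written for the Street View House Numbers format.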
