121 |
Retrofitting analogue meters with smart devices : A feasibility study of local OCR processes on an energy critical driven system
Andreasson, Joel, Ehrenbåge, Elin January 2023 (has links)
Internet of Things (IoT) devices are becoming increasingly popular replacements for their analogue counterparts. However, there is still demand to keep analogue equipment that is already installed, such as analogue water meters, while also having automated monitoring of that equipment. A proposed solution to this problem is to install a battery-powered add-on component that can optically read meter values using Optical Character Recognition (OCR) and transmit the readings wirelessly. Two ways to do this could be to either offload the OCR process to a server or to do the OCR processing locally on the add-on component. Since water meters are often located where reception is weak and the add-on component is battery powered, a suitable technology for data transmission could be Long Range (LoRa) because of its low-power and long-range capabilities. Since LoRa has a low transfer rate, there is a need to keep data transfers small, which could make offloading a less favorable alternative compared to local OCR processing. The purpose of this thesis is therefore to research the feasibility, in terms of energy efficiency, of doing local OCR processing on the add-on component. The feasibility condition of this study is defined as being able to continually read an analogue meter over a 10-year lifespan while consuming under 2600 milliampere hours (mAh). The two OCR algorithms developed for this study are a specialized OCR algorithm that utilizes pattern-matching principles and a Sum of Absolute Differences (SAD) OCR algorithm. These two algorithms were compared against each other to determine which one is more suitable for the system. The comparison showed that the SAD algorithm was more suitable; it was then studied further using different image resolutions and settings to determine whether energy consumption could be reduced further. The results showed that it was possible to significantly reduce energy consumption by reducing the image resolution. The study also researched the possibility of reducing energy consumption further by not reading all digits on the tested water meter, depending on the measuring frequency and water flow. The study concluded that OCR processing is feasible on an energy-critical system when reading analogue meters, depending on the measuring frequency.
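A minimal sketch of how a Sum of Absolute Differences comparison against stored digit templates could work; the template shapes and the usage values are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def sad_classify(digit_roi, templates):
    """Classify a cropped digit image by Sum of Absolute Differences.

    digit_roi -- 2-D uint8 array, one digit cut out of the meter image
    templates -- dict mapping digit value (0-9) to a template array of
                 the same shape as digit_roi
    Returns the digit whose template has the smallest SAD score.
    """
    best_digit, best_score = None, np.inf
    for digit, template in templates.items():
        # SAD: sum of per-pixel absolute differences; lower means more similar
        score = np.abs(digit_roi.astype(np.int32) - template.astype(np.int32)).sum()
        if score < best_score:
            best_digit, best_score = digit, score
    return best_digit, best_score

# Hypothetical usage: ten stored 16x24 templates, one per digit, and a
# low-resolution crop of a single meter digit.
templates = {d: np.zeros((24, 16), dtype=np.uint8) for d in range(10)}
digit_roi = np.zeros((24, 16), dtype=np.uint8)
print(sad_classify(digit_roi, templates))
```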
|
122 |
Utveckling och utvärdering av en automatiseringslösning hos Etteplan : En lösning på ett repetitivt arbetsmoment / Development and evaluation of an automation solution at Etteplan : A solution to a repetitive task
Fernström, Albin January 2023 (has links)
More and more companies are looking for efficient ways to perform repetitive tasks. The author came into contact with the company Etteplan, which manually reviews various documents to find and note any errors. The assignment the author received from Etteplan was to implement an automation solution that automatically reviews these documents and identifies any errors in the document's table header, which are then presented in an Excel file. Such errors can be that the various fields have the wrong format or that the fields are empty. The author's purpose was to develop and evaluate this automation solution and to find out how it was received at Etteplan, as well as which improvement suggestions emerged. Several technologies were used to develop the automation solution, such as the Python programming language, the OCR tool PyTesseract, and the computer vision library OpenCV. The automation solution was developed and evaluated iteratively, supported by Design Science Research together with unit tests, acceptance tests, think-aloud sessions, and semi-structured interviews, in which the automation solution was tested by the end users at Etteplan. The evaluation sessions yielded, among other things, opinions on the solution's functionality, appearance, and layout, as well as improvement suggestions. The results of the study indicate that the automation solution was perceived positively by the end users and that it can create added value at Etteplan. Several improvement suggestions emerged, for example that the layout of the Excel report could be improved. There were also requests for new functionality, for example support for reviewing more file formats, such as TIFF and DWG files.
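A minimal sketch of the kind of check described, using PyTesseract and OpenCV to read one table-header field from a fixed region of a scanned document and validate its format; the field coordinates, file name, and format pattern are hypothetical assumptions, not Etteplan's actual layout.

```python
import re
import cv2
import pytesseract

def check_table_header_field(image_path, box, pattern):
    """OCR one field of a document's table header and validate its format.

    box     -- (x, y, w, h) pixel region of the field (assumed known layout)
    pattern -- regular expression the field value must match
    Returns (text, is_valid); an empty field or a wrong format counts as invalid.
    """
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = box
    field = image[y:y + h, x:x + w]
    # Otsu thresholding to clean up scan noise before OCR
    _, field = cv2.threshold(field, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(field, config="--psm 7").strip()
    return text, bool(text) and re.fullmatch(pattern, text) is not None

# Hypothetical field: a document number expected to look like "E-12345".
print(check_table_header_field("drawing.png", (1800, 2300, 400, 60), r"E-\d{5}"))
```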
|
123 |
Bayesian Text Analytics for Document Collections
Walker, Daniel David 15 November 2012 (has links) (PDF)
Modern document collections are too large to annotate and curate manually. As increasingly large amounts of data become available, historians, librarians and other scholars increasingly need to rely on automated systems to efficiently and accurately analyze the contents of their collections and to find new and interesting patterns therein. Modern techniques in Bayesian text analytics are becoming widespread and have the potential to revolutionize the way that research is conducted. Much work has been done in the document modeling community towards this end, though most of it is focused on modern, relatively clean text data. We present research for improved modeling of document collections that may contain textual noise or that may include real-valued metadata associated with the documents. This class of documents includes many historical document collections. Indeed, our specific motivation for this work is to help improve the modeling of historical documents, which are often noisy and/or have historical context represented by metadata. Many historical documents are digitized by means of Optical Character Recognition (OCR) from document images of old and degraded original documents. Historical documents also often include associated metadata, such as timestamps, which can be incorporated in an analysis of their topical content. Many techniques, such as topic models, have been developed to automatically discover patterns of meaning in large collections of text. While these methods are useful, they can break down in the presence of OCR errors. We show the extent to which this performance breakdown occurs. The specific types of analyses covered in this dissertation are document clustering, feature selection, unsupervised and supervised topic modeling for documents with and without OCR errors, and a new supervised topic model that uses Bayesian nonparametrics to improve the modeling of document metadata. We present results in each of these areas, with an emphasis on studying the effects of noise on the performance of the algorithms and on modeling the metadata associated with the documents. In this research we effectively: improve the state of the art in both document clustering and topic modeling; introduce a useful synthetic dataset for historical document researchers; and present analyses that empirically show how existing algorithms break down in the presence of OCR errors.
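A toy illustration of the kind of experiment described, fitting a small topic model on clean text and on text with simulated OCR character noise; the corpus, the noise model, and the use of scikit-learn's LDA are invented for illustration and are unrelated to the dissertation's actual synthetic dataset.

```python
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

random.seed(0)

def add_ocr_noise(text, rate=0.15):
    """Corrupt a fraction of characters to mimic OCR substitution errors."""
    chars = list(text)
    for i in range(len(chars)):
        if chars[i].isalpha() and random.random() < rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

docs = [
    "the court heard the land dispute and issued a ruling",
    "the farm produced wheat and corn for the market",
    "the judge ruled on the property claim in court",
    "harvest of corn and wheat was sold at the market",
]
noisy_docs = [add_ocr_noise(d) for d in docs]

# Fit a small LDA model on clean and on noisy text; noise fragments the
# vocabulary, which is one way topic models degrade on OCR output.
for name, corpus in [("clean", docs), ("noisy", noisy_docs)]:
    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(corpus)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    print(name, "vocabulary size:", len(vectorizer.vocabulary_))
```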
|
124 |
Historische Wetterdaten im Spannungsfeld von OCR und UCD
Lehenmeier, Constantin, Burghardt, Manuel 29 May 2024 (has links)
This article describes computational challenges in the context of a digital humanities project for the digitization and analysis of historical weather records from the period 1774-1827. For processing the handwritten records, which contain peculiarities such as numerical measurements in tabular structure and overlapping notes, a suitably trained OCR (optical character recognition) approach is to be used in the long term. Creating the corresponding training data, as well as manually correcting the automatically recognized data, first poses software-ergonomic challenges from the perspective of media informatics. The focus of this article is therefore on building tools for humanities research projects that take into account principles of usability engineering and user-centered design (UCD).
|
125 |
Investigating Successful Methods for Hotel Managers to Encourage Customers to Leave More Online Reviews
Halvorsen, Ada, Hibic, Emina, Placina, Agneta January 2024 (has links)
Background: The great majority of travellers read online reviews before selecting a hotel, showcasing how big a role online customer reviews (OCR) play in the consumer decision-making process. Apart from potential financial gains, reviews also help to indicate the areas that performed excellently and those that still need to be improved. However, only a fraction of hotel guests actually leave an online review after a stay, indicating that there is still room to increase the number of OCRs left in order to boost hotel performance and drive sales. Purpose: The purpose of this research is to explore how hotel managers work with OCR and to provide recommendations on how they can incorporate it into their business successfully. Method: This study follows a qualitative research approach, conducting semi-structured interviews with five hotel managers and a CEO from a ratings and review agency. Conclusion: The study found that although all hotels use OCR to some extent, chain hotels often apply more advanced strategies. The study identified several strategies that could be adopted to increase the number of online reviews: for instance, developing omnichannel communication to make the customer experience seamless, offering guests easy feedback tools and the possibility to give short reviews, and additionally offering guests the option to share longer and deeper feedback afterwards. It is also important to research when it is most convenient for hotel guests to leave a review, which would be either straight after the hotel stay or a few days later. Overall, it is important to develop good customer relationship management to improve customer satisfaction, but at the same time to have a well-developed service failure system in case something negative is indicated in a review.
|
126 |
Arabic Text Recognition and Machine Translation
Alkhoury, Ihab 13 July 2015 (has links)
Research on Arabic Handwritten Text Recognition (HTR) and Arabic-English Machine Translation (MT) has usually been approached as two independent areas of study. However, creating one system that combines both areas, in order to generate an English translation from images containing Arabic text, is still a very challenging task. This process can be interpreted as the translation of Arabic images. In this thesis, we propose a system that recognizes Arabic handwritten text images and translates the recognized text into English. This system is built from the combination of an HTR system and an MT system.
Regarding the HTR system, our work focuses on the use of Bernoulli Hidden Markov Models (BHMMs). BHMMs have proven to work very well with Latin script. Indeed, empirical results based on them have been reported on well-known corpora, such as IAM and RIMES. In this thesis, these results are extended to Arabic script, in particular to the well-known IfN/ENIT and NIST OpenHaRT databases for Arabic handwritten text.
The need for transcribing Arabic text is not limited to handwritten text, but extends to printed text as well. Arabic printed text might be considered a simplified form of handwritten text. Thus, for this kind of text, we also propose Bernoulli HMMs. In addition, we propose to compare BHMMs with state-of-the-art technology based on neural networks.
A key idea that has proven to be very effective in this application of Bernoulli HMMs is the use of a sliding window of adequate width for feature extraction. This idea has allowed us to obtain very competitive results in the recognition of both Arabic handwriting and printed text. Indeed, a system based on it ranked first at the ICDAR 2011 Arabic recognition competition on the Arabic Printed Text Image (APTI) database. Moreover, this idea has been refined by using repositioning techniques for extracted windows, leading to further improvements in Arabic text recognition.
In the case of handwritten text, this refinement improved our system which ranked first at the ICFHR 2010 Arabic handwriting recognition competition on IfN/ENIT. In the case of printed text, this refinement led to an improved system which ranked second at the ICDAR 2013 Competition on Multi-font and Multi-size Digitally Represented Arabic Text on APTI. Furthermore, this refinement was used with neural networks-based technology, which led to state-of-the-art results.
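For illustration, a minimal sketch of sliding-window feature extraction over a binarized text-line image, the kind of observation sequence a Bernoulli HMM is trained on; the window width, image size, and column-major flattening are illustrative assumptions, and the repositioning refinement described above is omitted.

```python
import numpy as np

def sliding_window_features(line_image, width=9, step=1):
    """Turn a binarized text-line image into a sequence of window feature vectors.

    line_image -- 2-D array of 0/1 pixels (rows = height, cols = writing direction)
    width      -- window width in columns; wider windows capture more context
    Each window is flattened into one binary vector, giving the observation
    sequence that a Bernoulli HMM would be trained on.
    """
    height, n_cols = line_image.shape
    features = []
    for start in range(0, n_cols - width + 1, step):
        window = line_image[:, start:start + width]
        features.append(window.flatten(order="F"))  # column-major flattening
    return np.array(features)

# Hypothetical 30-pixel-high, 100-pixel-wide binarized line image
line_image = (np.random.rand(30, 100) > 0.8).astype(np.uint8)
obs = sliding_window_features(line_image, width=9)
print(obs.shape)  # (number of windows, 30 * 9)
```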
For machine translation, the system was based on the combination of three state-of-the-art statistical models: the standard phrase-based models, the hierarchical phrase-based models, and the N-gram phrase-based models. This combination was done using the Recognizer Output Voting Error Reduction (ROVER) method. Finally, we propose three methods of combining HTR and MT to develop an Arabic image translation system. The system was evaluated on the NIST OpenHaRT database, where competitive results were obtained.
Alkhoury, I. (2015). Arabic Text Recognition and Machine Translation [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53029
|
127 |
Arabic text recognition of printed manuscripts : efficient recognition of off-line printed Arabic text using Hidden Markov Models, Bigram Statistical Language Model, and post-processing
Al-Muhtaseb, Husni Abdulghani January 2010 (has links)
Arabic text recognition has not been researched as thoroughly as that of other natural languages, yet the need for automatic Arabic text recognition is clear. In addition to traditional applications like postal address reading, check verification in banks, and office automation, there is large interest in searching scanned documents that are available on the internet and in searching handwritten manuscripts. Other possible applications are building digital libraries, recognizing text on digitized maps, recognizing vehicle license plates, serving as a first phase in text readers for visually impaired people, and understanding filled forms. This research work aims to contribute to the current research in the field of optical character recognition (OCR) of printed Arabic text by developing novel techniques and schemes to advance the performance of state-of-the-art Arabic OCR systems. Statistical and analytical analysis of Arabic text was carried out to estimate the probabilities of occurrence of Arabic characters for use with Hidden Markov Models (HMMs) and other techniques. Since there is no publicly available dataset of printed Arabic text for recognition purposes, it was decided to create one. In addition, a minimal Arabic script is proposed. The proposed script contains all basic shapes of Arabic letters and provides an efficient representation of Arabic text in terms of effort and time. Based on the success of using HMMs for speech and text recognition, their use for the automatic recognition of Arabic text was investigated. The HMM technique adapts to noise and font variations and does not require word or character segmentation of Arabic line images. In the feature extraction phase, experiments were conducted with a number of different features to investigate their suitability for HMMs. Finally, a novel set of features, which resulted in high recognition rates for different fonts, was selected. The developed techniques do not need word or character segmentation before the classification phase, as segmentation is a byproduct of recognition. This seems to be the most advantageous feature of using HMMs for Arabic text, as segmentation tends to produce errors which are usually propagated to the classification phase. Eight different Arabic fonts were used in the classification phase. The recognition rates were in the range of 98% to 99.9%, depending on the font used. As far as we know, these are new results in their context. Moreover, the proposed technique could be used for other languages: a proof-of-concept experiment was conducted on English characters with a recognition rate of 98.9% using the same HMM setup, and the same techniques were applied to Bangla characters with a recognition rate above 95%. Moreover, the recognition of printed Arabic text with multiple fonts was also conducted using the same technique: fonts were categorized into different groups, and new high recognition results were achieved. To enhance the recognition rate further, a post-processing module was developed to correct the OCR output through character-level and word-level post-processing. The use of this module increased the recognition rate by more than 1%.
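As a rough illustration of a bigram statistical language model of the kind used in the post-processing step, the sketch below estimates character-bigram probabilities with add-one smoothing and scores candidate strings; the toy Latin-script corpus, the smoothing choice, and the start/end markers are assumptions for illustration, not the thesis's actual model.

```python
from collections import Counter
import math

def train_bigram_lm(corpus_lines):
    """Estimate character-bigram probabilities with add-one smoothing."""
    unigrams, bigrams = Counter(), Counter()
    for line in corpus_lines:
        line = "^" + line + "$"          # start / end markers
        unigrams.update(line[:-1])       # bigram contexts
        bigrams.update(zip(line, line[1:]))
    vocab = len(set("".join(corpus_lines)) | {"^", "$"})

    def log_prob(text):
        text = "^" + text + "$"
        return sum(
            math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
            for a, b in zip(text, text[1:])
        )
    return log_prob

# Toy Latin-script corpus standing in for the Arabic training text
log_prob = train_bigram_lm(["the cat sat", "the dog sat", "a cat ran"])
# A post-processor could prefer the OCR hypothesis with the higher LM score
print(log_prob("the cat"), log_prob("tne cat"))
```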
|
128 |
A Book Reader Design for Persons with Visual Impairment and Blindness
Galarza, Luis E. 16 November 2017 (has links)
The objective of this dissertation is to provide a new design approach to a fully automated book reader for individuals with visual impairment and blindness that is portable and cost-effective. This approach relies on the geometry of the design setup and provides the mathematical foundation for integrating, in a unique way, a 3-D surface map from a low-resolution time-of-flight (ToF) device with a high-resolution image as a means to enhance the reading accuracy of images warped by the page curvature of bound books and other magazines. The merits of this low-cost but effective automated book reader design include: (1) a seamless registration process of the two imaging modalities, so that the low-resolution (160 x 120 pixels) height map, acquired by an Argos3D-P100 camera, accurately covers the entire book spread as captured by the high-resolution image (3072 x 2304 pixels) of a Canon G6 camera; (2) a mathematical framework for overcoming the difficulties associated with the curvature of open bound books, a process referred to as the dewarping of the book spread images; and (3) an image correction performance comparison between uniform and full height maps to determine which map provides the highest Optical Character Recognition (OCR) reading accuracy possible. The design concept could also be applied to address the challenging process of book digitization. This method is dependent on the geometry of the book reader setup for acquiring a 3-D map that yields high reading accuracy once appropriately fused with the high-resolution image. The experiments were performed on a dataset consisting of 200 pages with their corresponding computed and co-registered height maps, which are made available to the research community (cate-book3dmaps.fiu.edu). Improvements in character reading accuracy due to the correction steps were quantified by introducing the corrected images to an OCR engine and tabulating the number of misrecognized characters. Furthermore, the resilience of the book reader was tested by introducing a rotational misalignment to the book spreads and comparing the OCR accuracy to that obtained with the standard alignment. The standard alignment yielded an average reading accuracy of 95.55% with the uniform height map (i.e., the height values of the central row of the 3-D map are replicated to approximate all other rows), and 96.11% with the full height maps (i.e., each row has its own height values as obtained from the 3-D camera). When the rotational misalignments were taken into account, the results obtained produced average accuracies of 90.63% and 94.75% for the same respective height maps, proving the added resilience of the full height map method to potential misalignments.
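The evaluation described above tabulates misrecognized characters per page. A minimal sketch of how such a character-accuracy figure could be computed from OCR output and ground truth using edit distance follows; the exact metric (1 minus normalized Levenshtein distance) is an assumption, since the dissertation only states that misrecognized characters were counted.

```python
def char_accuracy(ground_truth, ocr_output):
    """Character-level accuracy as 1 - (edit distance / ground-truth length).

    Uses plain Levenshtein distance, so substitutions, insertions and
    deletions all count as misrecognized characters.
    """
    m, n = len(ground_truth), len(ocr_output)
    dist = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cur = min(
                dist[j] + 1,      # delete a ground-truth character
                dist[j - 1] + 1,  # insert a spurious character
                prev + (ground_truth[i - 1] != ocr_output[j - 1]),  # substitute
            )
            prev, dist[j] = dist[j], cur
    return 1.0 - dist[n] / max(m, 1)

# Hypothetical page: dewarping should raise the score of the corrected image
print(char_accuracy("curvature of bound books", "curvatvre of bonnd books"))
```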
|
129 |
Localization And Recognition Of Text In Digital Media
Saracoglu, Ahmet 01 November 2007 (links) (PDF)
Textual information within digital media can be used in many areas, such as indexing and structuring media databases, aiding the visually impaired, translating foreign signs, and many more. Text in digital media can mainly be separated into two categories: overlay text and scene text. In this thesis, the localization and recognition of video text in digital media, regardless of its category, is investigated. As a necessary first step, the framework of a complete system is discussed. Next, a comparative analysis of feature vector and classification method pairs is presented. Furthermore, the multi-part nature of text is exploited by proposing a novel Markov Random Field approach for the classification of text/non-text regions. Additionally, better localization of text is achieved by introducing a bounding-box extraction method. For the recognition of text regions, a handprint-based Optical Character Recognition system is thoroughly investigated. During the investigation of text recognition, a multi-hypothesis approach for background segmentation is proposed by incorporating k-means clustering. Furthermore, a novel dictionary-based ranking mechanism is proposed for spelling correction of the recognition output. The overall system is simulated on a challenging data set. Also, a thorough survey on scene-text localization and recognition is presented, in which challenges are identified and discussed along with related work addressing them. Scene-text localization simulations on a public competition data set are also provided. Lastly, in order to improve the recognition performance of scene text on signs affected by perspective projection distortion, a rectification method is proposed and simulated.
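A minimal sketch of the multi-hypothesis idea: cluster the pixel colours of a localized text region with k-means and emit one binary mask per cluster as a segmentation hypothesis for the OCR stage. The region size, number of clusters, and use of scikit-learn are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def segmentation_hypotheses(region, k=3):
    """Cluster pixel colours of a detected text region into k groups and
    return one binary mask per cluster.

    Each mask is a segmentation hypothesis: the OCR engine can be run on
    every hypothesis and the best-recognized result kept.
    """
    h, w, _ = region.shape
    pixels = region.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    return [(labels == c).reshape(h, w).astype(np.uint8) * 255 for c in range(k)]

# Hypothetical 40x120 RGB crop of a localized scene-text region
region = np.random.randint(0, 256, size=(40, 120, 3), dtype=np.uint8)
masks = segmentation_hypotheses(region, k=3)
print(len(masks), masks[0].shape)
```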
|
130 |
[en] OPTICAL CHARACTER RECOGNITION FOR AUTOMATED LICENSE PLATE RECOGNITION SYSTEMS / [pt] IDENTIFICAÇÃO DE CARACTERES PARA RECONHECIMENTO AUTOMÁTICO DE PLACAS VEICULARES
EDUARDO PIMENTEL DE ALVARENGA 13 January 2017 (has links)
[en] ALPR systems are commonly used in applications such as traffic control, parking ticketing, exclusive lane monitoring, and others. The basic structure of an ALPR system can be divided into four major steps: image acquisition; license plate localization in a picture or video frame; character segmentation; and character recognition. In this work we focus solely on the recognition step. For this task, we used a multiclass Perceptron, enhanced by an entropy-guided feature generation technique. We show that it is possible to achieve results on par with the state-of-the-art solution, with a lightweight architecture that allows continuous learning, even on machines with low processing power, such as mobile devices.
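A minimal sketch of a multiclass perceptron of the kind described, classifying flattened binary character crops; the feature dimensionality, the 36-class alphabet, and the random training data are placeholders, and the entropy-guided feature generation step is not implemented here.

```python
import numpy as np

class MulticlassPerceptron:
    """One weight vector per class; predict the class with the highest score."""

    def __init__(self, n_features, n_classes):
        self.w = np.zeros((n_classes, n_features))

    def predict(self, x):
        return int(np.argmax(self.w @ x))

    def train(self, samples, labels, epochs=10):
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = self.predict(x)
                if pred != y:
                    # Standard multiclass update: reward the true class,
                    # penalize the wrongly predicted one
                    self.w[y] += x
                    self.w[pred] -= x

# Hypothetical setup: 20x10 binarized character crops flattened to 200 features,
# 36 classes (0-9, A-Z) as found on license plates.
rng = np.random.default_rng(0)
X = (rng.random((100, 200)) > 0.5).astype(float)
y = rng.integers(0, 36, size=100)
clf = MulticlassPerceptron(n_features=200, n_classes=36)
clf.train(X, y)
print(clf.predict(X[0]))
```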
|