  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Kontrola zobrazení textu ve formulářích / Quality Check of Text in Forms

Moravec, Zbyněk January 2017 (has links)
The purpose of this thesis is to check the quality of button text displayed on photographed monitors. These photographs contain a variety of image distortions, which complicates the subsequent recognition of graphical elements in the image. The paper outlines several possibilities for detecting buttons on forms and elaborates on the implemented detection, which is based on contour shape description. Once the buttons are found, their defects are detected. Additionally, the thesis describes the automatic identification of the highest-quality picture for documentation purposes.
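As a rough illustration of contour-based button detection of the kind this abstract describes, the following Python sketch (using OpenCV) looks for rectangular, button-like contours in a photographed screen; the threshold parameters and size limits are arbitrary assumptions, not the thesis's actual implementation.

import cv2

def find_button_candidates(image_path, min_area=500, max_area=50000):
    """Locate rectangular, button-like contours in a photographed screen."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding copes better with the uneven lighting of photographed monitors.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue
        # Approximate the contour; near-rectangular convex shapes are button candidates.
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            candidates.append(cv2.boundingRect(approx))
    return candidates  # list of (x, y, w, h) boxes to pass on to text/defect checks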
152

Rozpoznávání historických textů pomocí hlubokých neuronových sítí / Convolutional Networks for Historic Text Recognition

Kišš, Martin January 2018 (has links)
The aim of this work is to create a tool for the automatic transcription of historical documents. The work focuses mainly on the recognition of early-modern texts written in the Fraktur typeface. The problem is solved with a newly designed recurrent convolutional neural network and a Spatial Transformer Network. Part of the solution is an implemented generator of artificial historical texts. Using this generator, an artificial data set is created on which the convolutional neural network for line recognition is trained. The network is then tested on real historical lines of text, on which it achieves up to 89.0 % character accuracy. The contribution of this work is primarily the newly designed neural network for text line recognition and the implemented artificial text generator, with which it is possible to train the neural network to recognize real historical lines of text.
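To make the line-recognition architecture concrete, here is a minimal, hypothetical PyTorch sketch of a recurrent convolutional network with a CTC output layer. The layer sizes, alphabet size, and input resolution are illustrative assumptions, and the Spatial Transformer Network and the training loop on generated data are omitted; this is not the network designed in the thesis.

import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Convolutional feature extractor followed by a bidirectional LSTM and a per-step classifier."""
    def __init__(self, num_chars, img_height=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        feat_dim = 128 * (img_height // 4)          # one feature vector per image column
        self.rnn = nn.LSTM(feat_dim, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_chars + 1)     # +1 for the CTC blank symbol

    def forward(self, x):                           # x: (batch, 1, H, W)
        f = self.conv(x)                            # (batch, C, H/4, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)
        out, _ = self.rnn(f)
        return self.fc(out)                         # (batch, W/4, num_chars + 1) logits

model = CRNN(num_chars=80)
ctc_loss = nn.CTCLoss(blank=80, zero_infinity=True)   # class index 80 reserved for the CTC blank
logits = model(torch.zeros(4, 1, 32, 128))             # -> shape (4, 32, 81): 32 time steps per line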
153

Využití hlubokého učení pro rozpoznání textu v obrazu grafického uživatelského rozhraní / Deep Learning for OCR in GUI

Hamerník, Pavel January 2019 (has links)
Optical character recognition (OCR) has been a topic of interest for many years. It is defined as the process of digitizing a document image into a sequence of characters. Despite decades of intense research, OCR systems with capabilities comparable to those of humans remain an open challenge. This work presents the design and implementation of such a system, which is capable of detecting text in graphical user interfaces.
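As a hedged sketch of the general task (not the system implemented in the thesis), text in a GUI screenshot can be located with an off-the-shelf OCR engine such as Tesseract through the pytesseract wrapper; the confidence threshold below is an arbitrary assumption.

import cv2
import pytesseract
from pytesseract import Output

def detect_gui_text(screenshot_path, min_conf=60):
    """Return (text, bounding box) pairs for words detected in a GUI screenshot."""
    img = cv2.imread(screenshot_path)
    data = pytesseract.image_to_data(img, output_type=Output.DICT)
    results = []
    for i, text in enumerate(data["text"]):
        conf = int(float(data["conf"][i]))  # pytesseract reports confidence per word
        if text.strip() and conf >= min_conf:
            box = (data["left"][i], data["top"][i], data["width"][i], data["height"][i])
            results.append((text, box))
    return results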
155

Active Learning pro zpracování archivních pramenů / Active Learning for Processing of Archive Sources

Hříbek, David January 2021 (has links)
This work deals with the creation of a system that allows uploading and annotating scans of historical documents and the subsequent active learning of character recognition (OCR) models on the available annotations (marked lines and their transcripts). The work describes the process, classifies the techniques, and presents an existing character recognition system, with emphasis placed above all on machine learning methods. Furthermore, active learning methods are explained and a method for actively training the available OCR models from annotated scans is proposed. The rest of the work covers the system design, implementation, available datasets, evaluation of the newly created OCR model, and testing of the entire system.
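A minimal sketch of the uncertainty-based selection step that active learning of an OCR model typically relies on; the model interface and the scoring here are hypothetical, not the method proposed in the work.

def select_lines_for_annotation(model, unlabeled_lines, budget=50):
    """Least-confidence sampling: pick the text lines the OCR model is most unsure about."""
    scored = []
    for line_image in unlabeled_lines:
        # Assumed interface: the model returns a transcript plus per-character confidences.
        transcript, char_confidences = model.transcribe(line_image)
        confidence = min(char_confidences) if char_confidences else 0.0
        scored.append((confidence, line_image, transcript))
    # The lowest-confidence lines are the most informative ones to annotate next.
    scored.sort(key=lambda item: item[0])
    return scored[:budget]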
156

Digitalní knihovna Kramerius a její využívání studenty historických oborů / Digital library Kramerius and its use by students of historical sciences

Fišer, Marek January 2012 (has links)
The thesis focuses on the Kramerius digital library, which provides electronic access mainly to Bohemian documents from the 19th and 20th centuries. Its aim is not to describe all parts of this system and its background; rather, the thesis concentrates on the user's point of view. The core of the thesis is a user survey conducted among students of historical disciplines at the Faculty of Arts, Charles University in Prague, namely students enrolled in the study programme Historical Sciences. The first part characterizes the Kramerius digital library with a focus on the aspects closely related to its usability (user interface, accessibility of documents, conversion of the original documents to machine-readable form). The user survey, described in the second part, is divided into two parts.
157

Scalable Detection and Extraction of Data in Lists in OCRed Text for Ontology Population Using Semi-Supervised and Unsupervised Active Wrapper Induction

Packer, Thomas L 01 October 2014 (has links) (PDF)
Lists of records in machine-printed documents contain much useful information. As one example, the thousands of family history books scanned, OCRed, and placed on-line by FamilySearch.org probably contain hundreds of millions of fact assertions about people, places, family relationships, and life events. Data like this cannot be fully utilized until a person or process locates the data in the document text, extracts it, and structures it with respect to an ontology or database schema. Yet, in the family history industry and other industries, data in lists goes largely unused because no known approach adequately addresses all of the costs, challenges, and requirements of a complete end-to-end solution to this task. The diverse information is costly to extract because many kinds of lists appear even within a single document, differing from each other in both structure and content. The lists' records and component data fields are usually not set apart explicitly from the rest of the text, especially in a corpus of OCRed historical documents. OCR errors and the lack of document structure (e.g. HTML tags) make list content hard to recognize by a software tool developed without a substantial amount of highly specialized, hand-coded knowledge or machine learning supervision. Making an approach that is not only accurate but also sufficiently scalable in terms of time and space complexity to process a large corpus efficiently is especially challenging.

In this dissertation, we introduce a novel family of scalable approaches to list discovery and ontology population. Its contributions include the following. We introduce the first general-purpose methods of which we are aware for both list detection and wrapper induction for lists in OCRed or other plain text. We formally outline a mapping between in-line labeled text and populated ontologies, effectively reducing the ontology population problem to a sequence labeling problem, opening the door to applying sequence labelers and other common text tools to the goal of populating a richly structured ontology from text. We provide a novel admissible heuristic for inducing regular expression wrappers using an A* search. We introduce two ways of modeling list-structured text with a hidden Markov model. We present two query strategies for active learning in a list-wrapper induction setting. Our primary contributions are two complete and scalable wrapper-induction-based solutions to the end-to-end challenge of finding lists, extracting data, and populating an ontology. The first has linear time and space complexity and extracts highly accurate information at a low cost in terms of user involvement. The second has time and space complexity that are linear in the size of the input text and quadratic in the length of an output record and achieves higher F1-measures for extracted information as a function of supervision cost. We measure the performance of each of these approaches and show that they perform better than strong baselines, including variations of our own approaches and a conditional random field-based approach.
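To illustrate the regular-expression wrapper idea in miniature (a toy, hand-written wrapper over an assumed record format, not the induced wrappers, A* heuristic, or HMMs of the dissertation), labeling fields in OCRed list lines might look like this:

import re

# A hand-written wrapper for one hypothetical list style, e.g.
# "3. Mary Smith, b. 1872, Nauvoo." -- in the dissertation such wrappers are induced, not written.
RECORD_WRAPPER = re.compile(
    r"(?P<child_no>\d+)\.\s+"
    r"(?P<name>[A-Z][\w'-]+(?:\s+[A-Z][\w'-]+)*),\s+"
    r"b\.\s+(?P<birth_year>\d{4}),\s+"
    r"(?P<birth_place>[^.]+)\."
)

def extract_records(ocr_lines):
    """Map matching lines to field dictionaries ready for ontology population."""
    records = []
    for line in ocr_lines:
        m = RECORD_WRAPPER.search(line)
        if m:
            records.append(m.groupdict())
    return records

print(extract_records(["3. Mary Smith, b. 1872, Nauvoo."]))
# [{'child_no': '3', 'name': 'Mary Smith', 'birth_year': '1872', 'birth_place': 'Nauvoo'}]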
158

Effektivisering av Tillverkningsprocesser med Artificiell Intelligens : Minskad Materialförbrukning och Förbättrad Kvalitetskontroll / Streamlining Manufacturing Processes with Artificial Intelligence: Reduced Material Consumption and Improved Quality Control

Al-Saaid, Kasim, Holm, Daniel January 2024 (has links)
This report explores the implementation of AI techniques in the manufacturing process at Ovako, focusing on process optimization, individual traceability, and quality control. By integrating advanced AI models and techniques at various levels of the production process, Ovako can improve efficiency, reduce material consumption, and prevent production stops. For example, predictive maintenance can be applied to anticipate and prevent machine problems, while image recognition algorithms and optical character recognition enable individual traceability of each rod throughout the process. Furthermore, AI-based quality control can detect defects and deviations with high precision and speed, reducing the risk of faulty products and increasing product quality. By carefully considering the role of the workforce, safety and ethical issues, and the benefits and challenges of AI implementation, Ovako can maximize the benefits of these techniques and strengthen its competitiveness in the market.
159

Localization and quality enhancement for automatic recognition of vehicle license plates in video sequences / Localisation et amélioration de qualité pour reconnaissance automatique de plaques d'immatriculation de véhicules dans les séquences vidéo.

Nguyen, Chu Duc 29 June 2011 (has links)
Automatic reading of vehicle license plates is regarded as a mass-surveillance technique: through detection/localization and optical character recognition, it identifies a vehicle in images or image sequences. Many applications, such as traffic monitoring, detection of stolen vehicles, electronic toll collection, and car-park entry/exit management, use this process. Yet despite significant progress since the first prototypes appeared in 1979, and recognition rates that are sometimes impressive thanks to advances in scientific research and sensor technology, the constraints imposed for such systems to work properly limit their scope. Indeed, the optimal use of license plate localization and recognition techniques in operational scenarios requires controlled lighting conditions as well as restrictions on pose, speed, or simply plate type. Automatic license plate reading therefore remains an open research problem. The major contribution of this thesis is threefold. First, a new robust approach to license plate localization in images or image sequences is proposed. Then, the quality of the localized plates is improved through an adaptation of super-resolution techniques. Finally, a unified localization and super-resolution model is proposed, reducing the time complexity of the two approaches combined.
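As a rough illustration of the localization stage only (a generic vertical-edge-density heuristic in OpenCV with assumed thresholds, not the robust approach proposed in the thesis):

import cv2

def locate_plate_candidates(frame, min_aspect=2.0, max_aspect=6.0):
    """Find plate-like regions: high vertical-edge density and a wide rectangular aspect ratio."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Vertical edges dominate in the character strokes of a license plate.
    edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Close gaps between characters so the plate becomes one connected blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and min_aspect <= w / h <= max_aspect and w * h > 1000:
            candidates.append((x, y, w, h))
    return candidates  # boxes to be refined, super-resolved, and passed to OCR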
160

Os impactos do uso de tecnologia da informação e da identificação e captura automática de dados nos processos operacionais do varejo / The impacts of the use of information technology and automatic identification and data capture on retail operational processes

Romano, Regiane Relva 09 December 2011 (has links)
This study sought to identify the main IT (Information Technology) and AIDC (Automatic Identification and Data Capture) technologies available for the self-service retail sector, in order to fill the gap in the literature on the benefits of using new technologies at the point of sale to optimize its operation. To this end, the main operational processes of a self-service retail store were studied, with a view to identifying how IT and AIDC technologies could help improve operating results and add value to the business. To analyze its propositions (that IT and AIDC can help reduce back-office process times, reduce the number of operations at the point of sale, prevent losses, reduce inventory costs and times, reduce the number of store employees, shorten checkout queues, reduce stock-outs, and increase the store's operational efficiency), several worldwide case studies of retail companies that implemented AIDC and IT technologies, mainly RFID, were surveyed to learn the impacts of these technologies on their operations; a comprehensive case study was then developed to understand the real business benefits of using these technologies in self-service retail. As a final result, it was possible to identify the changes in self-service retail operational processes, as well as the benefits generated in terms of cost, productivity, quality, flexibility, and innovation.

The work also highlighted the critical success factors for implementing IT and AIDC in retail: the review of operating processes and the correct definition of the hardware, supplies, software, physical-environment interferences, availability of product data/information, people/employees, and business partners/suppliers. More specifically, this work sought to contribute to enriching the field of retail studies and the use of information technology in Brazil, since the use and impact of new technologies at the point of sale remain little explored academically.
