21

Analysis of documents and user behavior to improve information access

Jean-Caurant, Axel 08 October 2018 (has links)
The constant increase in the number of available documents and of the tools to access them has transformed information-seeking practices. For some years now, more and more information retrieval platforms have been made available online to the scientific community and the general public. This data deluge is a great opportunity for users seeking information, but it comes with new problems and new challenges to overcome. Formerly, the main issue for researchers was to determine whether a particular resource existed; today, the challenge is rather how to access relevant information. We have identified two distinct levers to limit the impact of this new search paradigm. First, we believe it is necessary to analyze how the different search platforms are used. Being able to interpret user behavior is a necessary step toward understanding what users grasp of these search systems and what still needs to be explained to them. Indeed, most systems act as black boxes that conceal the underlying transformations applied to the data. Users do not need to understand in detail how these algorithms work, but the algorithms have a major impact on the accessibility of information and must be taken into account when exploiting search results. Why is the search engine returning these particular results? Why is this document more relevant than another? Such seemingly naive questions are nonetheless essential to a critical approach to information search and retrieval. We think users have both a right and a duty to question the relevance of the computational tools at their disposal. To help them do so, we developed a dual-use online information search platform. On the one hand, it can be used to observe and understand user behavior; on the other, it can serve as a pedagogical medium to highlight the search biases users may be exposed to. At the same time, the tools themselves must be improved. In the second part of this thesis, we study the impact that document quality has on accessibility. Because the quantity of available documents keeps increasing, human operators are less and less able to correct them manually and ensure their quality. New strategies are therefore needed to improve the way search platforms operate and process documents. We propose a method to automatically identify and correct certain errors generated by automatic information extraction processes, in particular OCR.
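The closing claim, automatic identification and correction of OCR-generated errors, is the kind of step a small sketch makes concrete. The following is a minimal illustrative sketch of one common approach (dictionary lookup plus closest-match correction), not the author's actual method; the word list is invented.

```python
# Illustrative sketch only: flag out-of-dictionary tokens in OCR output
# and propose the closest dictionary word. The thesis's real method is
# more involved; this toy DICTIONARY is invented for the example.
from difflib import get_close_matches

DICTIONARY = {"information", "access", "document", "search", "behavior"}

def correct_token(token, cutoff=0.8):
    if token.lower() in DICTIONARY:
        return token  # already a known word
    candidates = get_close_matches(token.lower(), DICTIONARY, n=1, cutoff=cutoff)
    return candidates[0] if candidates else token  # leave unknowns untouched

ocr_output = "informatlon acccss to the documcnt"
print(" ".join(correct_token(t) for t in ocr_output.split()))
# -> "information access to the document" (given this toy dictionary)
```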
22

Multi Criteria Mapping Based on SVM and Clustering Methods

Diddikadi, Abhishek 09 November 2015 (has links) (PDF)
There are many ways to automate the application process, such as commercial software used in large organizations to scan bills and forms, but such applications handle only static frames or formats. Our application attempts to automate non-static frames, since the study certificates we receive come from different countries and different universities. Each university has its own certificate format, so we develop a new application that works across all of these formats. Since many applicants come from the same university, with a common certificate format, such a tool lets us analyze these certificates simply and in very little time. To make the process more accurate, we apply SVM and clustering methods. With these methods we can accurately map the courses in a certificate to the ASE study path, or otherwise to an exclusion list. A grade calculation is performed for courses mapped to the ASE list, separating the data for labs and courses. At the end, points are awarded, covering ASE-related courses, work experience, specialization certificates, and German language skills. Finally, these points are provided to the chair to select applicants for the ASE master's course.
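The abstract names SVM (alongside clustering) for mapping certificate courses to the ASE study path. Below is a hedged sketch of how such a text classifier might look; the course titles, labels, and feature choice are all invented for illustration, not taken from the thesis.

```python
# Hypothetical sketch: classify course titles as "ase" (maps to the ASE
# study path) or "exclude", using character n-gram TF-IDF features and a
# linear SVM. Training data here is a toy stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

courses = [
    "Embedded Systems", "Control Engineering", "Digital Signal Processing",
    "Art History", "Business Ethics", "Real-Time Operating Systems",
]
labels = ["ase", "ase", "ase", "exclude", "exclude", "ase"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # robust to OCR noise
    LinearSVC(),
)
model.fit(courses, labels)

print(model.predict(["Microcontroller Programming", "Medieval Poetry"]))
# e.g. ['ase' 'exclude'] -- entirely dependent on the training data
```

In practice the course titles would come from the OCR pass over the certificate, and the label set would mirror the actual ASE module catalogue.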
23

Algorithms for business document detection using templates

Michalko, Jakub January 2016 (has links)
This thesis deals with the analysis and design of a system for automatic document recognition. The system examines a document and converts it into text data, together with information about the position of each word in the original document. These data are then reviewed, and some of the words are assigned an importance; the assignment is based on rules that may vary according to user needs. From the words, their assigned importance, and their positions, the system finds a similar document and thereby identifies the current one.
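As a rough sketch of the matching idea the abstract describes, scoring a document against a template by importance-weighted keywords and their positions. Everything here (the scoring rule, the template, the coordinates) is an invented illustration, not the thesis's algorithm.

```python
# Illustrative sketch: score how well a document's recognized words match
# a template's expected keywords, weighting each keyword by a user-defined
# importance and penalizing positional drift. All values are invented.
def template_score(doc_words, template):
    """doc_words: {word: (x, y)} with normalized page coordinates.
    template: {word: (importance, (x, y))} of expected keywords."""
    total = sum(imp for imp, _ in template.values())
    score = 0.0
    for word, (imp, (tx, ty)) in template.items():
        if word in doc_words:
            x, y = doc_words[word]
            dist = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
            score += imp * max(0.0, 1.0 - dist)  # closer position -> more credit
    return score / total if total else 0.0

invoice_template = {"invoice": (3.0, (0.1, 0.05)), "total": (2.0, (0.7, 0.9)),
                    "vat": (1.0, (0.7, 0.85))}
doc = {"invoice": (0.12, 0.06), "total": (0.68, 0.91), "date": (0.8, 0.1)}
print(f"similarity: {template_score(doc, invoice_template):.2f}")
```

The best-scoring template then identifies the document type, with the rules (keywords and weights) adjustable per user, as the abstract suggests.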
24

Hybrid extraction and structural description of characters for effective text recognition in heterogeneous scanned documents: Methods and parallel algorithms

Soua, Mahmoud 08 November 2016 (has links)
Optical Character Recognition (OCR) is a process that converts text images into editable text documents. Today, these systems are widely used in dematerialization applications such as mail sorting, bill management, etc. In this context, the aim of this thesis is to propose an OCR system that offers a better compromise between recognition rate and processing speed, allowing reliable, real-time document dematerialization. To be recognized, the text is first extracted from the background. It is then segmented into disjoint characters, which are described on the basis of their structural characteristics. Finally, the characters are recognized by matching their descriptors against a predefined base. Text extraction, based on binarization methods, remains difficult in heterogeneous scanned documents with a complex, noisy background, where the text may be confused with a textured or multi-colored background, or distorted by digitization noise. On the other hand, the description of the extracted and segmented characters is often either complex (geometric transformations, a large number of features) or weakly discriminative when the chosen features are sensitive to variations in scale, font, style, etc. We therefore adapt binarization to heterogeneous scanned documents, and we provide a highly discriminative description of characters based on the study of their structure through their horizontal and vertical projections. To ensure real-time processing, we parallelize the developed algorithms on the graphics processing unit (GPU). The main contributions of our proposed OCR system are as follows. First, a new text extraction method for heterogeneous scanned documents that include text regions with complex or homogeneous backgrounds. In this method, an image analysis process is followed by a classification of the document regions into image regions (text with a complex background) and text regions (text with a homogeneous background). For text regions, the textual information is extracted using a hybrid classification method based on the K-means algorithm (CHK) that we developed; image regions are enhanced with Gamma Correction (CG) before CHK is applied. Our experiments show that this text extraction method achieves a character recognition rate of 98.5% on heterogeneous scanned documents. Second, a Unified Character Descriptor based on the study of character structure. It employs a sufficient number of features, obtained by unifying the descriptors of the horizontal and vertical projections of the characters, for efficient discrimination. The advantage of this descriptor is both its high performance and its computational simplicity; it supports the recognition of alphanumeric and multi-scale characters and achieves a character recognition rate of 100% for a given font and size. Third, a parallelization of the character recognition system. The GPU was used as the parallelization platform; flexible and powerful, this architecture offers an effective solution for accelerating intensive image processing algorithms. Our implementation combines coarse- and fine-grained parallelization strategies to speed up the steps of the OCR chain; in addition, CPU-GPU communication overheads are avoided and good memory management is ensured. The effectiveness of our implementation is validated through extensive experiments.
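The Unified Character Descriptor is described as the unification of horizontal and vertical projection profiles. Below is a minimal generic reconstruction of that idea in Python, fixed-length, normalized row/column ink profiles concatenated into one feature vector; it is not the author's exact feature set or normalization.

```python
# Sketch of a projection-based character descriptor in the spirit of the
# thesis's Unified Character Descriptor: concatenate resampled horizontal
# and vertical ink projections of a binarized glyph. Generic reconstruction.
import numpy as np

def projection_descriptor(glyph, bins=16):
    """glyph: 2-D binary array (1 = ink). Returns a 2*bins feature vector."""
    h_proj = glyph.sum(axis=1).astype(float)  # ink per row
    v_proj = glyph.sum(axis=0).astype(float)  # ink per column

    def resample(p):
        # Fixed-length profile so the descriptor is independent of pixel size.
        xs = np.linspace(0, len(p) - 1, bins)
        return np.interp(xs, np.arange(len(p)), p)

    feat = np.concatenate([resample(h_proj), resample(v_proj)])
    return feat / max(feat.max(), 1e-9)  # amplitude-normalize

# Recognition would then be nearest-neighbour matching of this vector
# against a base of reference descriptors, one per known character.
```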
25

Improving the quality of the text, a pilot project to assess and correct the OCR in a multilingual environment

Maurer, Yves 16 October 2017 (has links)
The user expectation of a digitized collection is that a full-text search can be performed and will retrieve all the relevant results. The reality, however, is that the errors introduced during Optical Character Recognition (OCR) degrade the results significantly, and users do not get what they expect. The National Library of Luxembourg started its digitization program in 2000 and began performing OCR on the scanned images in 2005. The OCR was always performed by the scanning suppliers, so over the years quite a lot of different OCR programs, in different versions, have been used. The manual parts of the digitization chain (handling, scanning, zoning, …) are difficult, costly, and largely irreducible, so the library decided that the supplier should focus on a high quality level for these parts. OCR is an automated process, and the library believed that the text recognized by the OCR could later be improved automatically, since OCR software improves over the years; this is why the library never asked the supplier for a minimum recognition rate. The author proposes to test this assumption by first evaluating the base quality of the text extracted by the original supplier, then running a contemporary OCR program, and finally comparing its quality to the first extraction. The corpus used is the collection of digitized newspapers from Luxembourg, published from the 18th century to the 20th century. A complicating element is that the corpus covers three main languages, German, French, and Luxembourgish, which are often present together on a single newspaper page. A preliminary step is hence added to detect the language used in a block of text so that the correct dictionaries and OCR engines can be used.
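A minimal sketch of the comparison at the heart of such a pilot, assuming a hand-corrected ground-truth sample exists: compute the character error rate (CER) of the stored supplier text and of a re-OCR'd text against the truth. The strings below are invented; the thesis's actual evaluation pipeline is not shown here.

```python
# Sketch: character error rate via Levenshtein distance, used to compare
# the original supplier OCR and a contemporary re-OCR against ground truth.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def cer(hypothesis, reference):
    return levenshtein(hypothesis, reference) / max(len(reference), 1)

truth   = "Die Regierung hat gestern beschlossen"
old_ocr = "Dle Regicrung hat gestem beschlossen"
new_ocr = "Die Regierung hat gestern beschlossen"
print(f"original supplier CER: {cer(old_ocr, truth):.3f}")
print(f"re-OCR CER:            {cer(new_ocr, truth):.3f}")
```

If the re-OCR CER is consistently lower, the library's assumption, that recognition can be redone better later, holds for that part of the corpus.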
26

Automatic Analog Data Transfer from Paper Surveys to Digital Database at Karolinska University Hospital Huddinge

Schaedel, Karin, Söderberg, Tommy January 2020 (has links)
At Karolinska University Hospital Huddinge, a large number of knee replacement surgery questionnaires have piled up over two years. The answers must be converted to digital format so that they can be stored in the REDCap database for quality control and prospective follow-up over several years. To save working hours, a program that could read the questionnaires automatically was requested. In this project, a program was created in MATLAB with the goal of reading the questionnaire markings and at least 70% of the social security numbers. The social security numbers were to be entered into one Excel sheet and the remaining answer data into a separate Excel sheet, due to confidentiality laws. The result was that the program could not read social security numbers or other handwritten text, but it did read marked multiple-choice questions with 90% accuracy on the questionnaires for which it was designed. The program can currently be used to speed up data entry in combination with proofreading by staff; however, further development is recommended before it is put into use.
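The working part of the program, reading marked multiple-choice answers, typically reduces to measuring ink density inside known checkbox regions. The thesis implementation is in MATLAB; the Python sketch below only illustrates that general idea, with invented coordinates and threshold.

```python
# Illustrative sketch of mark detection on a scanned survey page: a
# checkbox is judged "marked" when its dark-pixel fraction exceeds a
# threshold. Box positions and the threshold are invented examples.
import numpy as np

def is_marked(page, box, threshold=0.15):
    """page: 2-D grayscale array (0 = black, 255 = white).
    box: (row, col, height, width) of a checkbox on the page."""
    r, c, h, w = box
    region = page[r:r + h, c:c + w]
    ink_ratio = np.mean(region < 128)  # fraction of dark pixels
    return ink_ratio > threshold

# Toy page: near-white background with a simulated pen mark in one box.
page = (np.random.rand(1000, 800) * 40 + 215).astype(np.uint8)
page[300:320, 100:120] = 0  # mark in the "yes" box
boxes = {"yes": (300, 100, 20, 20), "no": (300, 200, 20, 20)}
print({k: is_marked(page, b) for k, b in boxes.items()})
# -> {'yes': True, 'no': False}
```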
27

OCR scanning in Android with Google ML Kit: An application for compiling receipts

van Herbert, Niklas January 2022 (has links)
If two parties with shared finances want to review and tally their purchases from grocery stores, there are two options: save all physical receipts and handle the calculation manually, or use digital receipts, which far from all grocery stores offer. There are also no agreements between companies on where digital receipts should be stored, so a user may have to log in at several different services. The user is thus left to manage physical receipts manually, manage digital receipts manually, or, in the worst case, a mixture of both. Regardless of the form of the receipt, every receipt must be reviewed to see which person made the purchase, which items, if any, should be excluded, and what the total is. The aim of this project has therefore been to create an Android application that uses the Google ML Kit OCR library to let two users manage their receipts. The report examines the difficulties encountered in text recognition and presents the techniques and methods used during the creation of the application. The application was then evaluated by extracting text from several different receipts. Google's OCR library was also compared with Tesseract OCR to investigate whether a different OCR library could have improved the reliability of receipt scanning. The final result is an application that works well when a receipt is scanned correctly; however, there are significant difficulties in extracting text from receipts that deviate from the receipt templates used during implementation.
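After text recognition, the app still has to pull structured fields out of the recognized lines. A hedged sketch of one such step, finding the total amount; the keywords, line formats, and regex below are invented examples, not the thesis's parsing rules.

```python
# Illustrative post-OCR step: locate the total on a receipt from the
# recognized text lines. Keywords and formats are invented examples.
import re

TOTAL_RE = re.compile(r"(?i)\b(total|summa|att betala)\b\D*(\d+[.,]\d{2})")

def find_total(ocr_lines):
    for line in ocr_lines:
        m = TOTAL_RE.search(line)
        if m:
            return float(m.group(2).replace(",", "."))
    return None  # total not recognized -> fall back to manual entry

lines = ["ICA Supermarket", "Mjolk 2st  25,90", "Summa  125,50", "Kort  125,50"]
print(find_total(lines))  # 125.5
```

Receipts that deviate from the expected layouts break exactly this kind of rule, which matches the difficulty the abstract reports.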
28

What does the future hold for OCR?

Lund, Mikael January 2007 (has links)
This thesis is about OCR (Optical Character Recognition). OCR technology converts scanned images of machine-printed or handwritten text (numerals, letters, and symbols) into a computer-processable format. The purpose of the thesis is to explore the future of OCR and the areas in which the technology is used today; the interesting question is how OCR fares as more and more material becomes digital. The work draws on information from books, the Internet, and e-mail, and on a closer look at a company in the graphic industry that uses OCR, namely Aftonbladet. I have also tested an OCR program, ABBYY FineReader 8, on a number of test themes, for example a mathematics test and various tests on articles from a few newspapers. My conclusions are that OCR has a future, but the technology has room for improvement, for example in interpreting handwritten text. OCR can survive even as more and more material becomes digital if it is integrated into existing technologies, for example in a spam filter that interprets the text within an image. The current OCR technology works well when the material is machine-printed and in good condition, but it must become better at interpreting handwritten text before it can be used to archive such texts.
29

Omnichannel management: The art of omnichannel orchestration

Toscano, Edward, Sanchez, Nicholas January 2020 (has links)
Digital advances and changing consumer buying behavior are disrupting the retail industry, with customers demanding more seamless experiences during their purchases. In response, retailers are adopting an omnichannel retailing (OCR) strategy: the integration of retailers' physical and digital channels. However, OCR is still an immature concept, and research on the subject remains scarce, which limits the guidance available for its practical application; there is thus still a need to understand the subject. For OCR managers, it is necessary to understand the main challenges in order to orchestrate OCR better. This research therefore undertakes the task of studying the factors that challenge OCR orchestration from a managerial perspective. The research draws on primary and secondary data that were categorized according to their main factor and incorporated into an existing analytical framework of OCR. The findings indicate three main groups of challenges that can hinder an orchestrator's impact on the organization: the particular capabilities required for OCR, the integration of channels, and the leveraging of technology and data.
30

Biophysical study of the DNA charge mimicry displayed by the T7 Ocr protein

Stephanou, Augoustinos S. January 2010 (has links)
The homodimeric Ocr protein of bacteriophage T7 is a molecular mimic of a bent double-stranded DNA molecule ~24 bp in length. As such, Ocr is a highly effective competitive inhibitor of the bacterial Type I restriction/modification (R/M) system, allowing phage infection of the bacterial cell to proceed unhindered by the R/M defense system. The main aim of this work was to understand the basis of the DNA mimicry displayed by Ocr. The surface of the protein is replete with acidic residues, most or all of which mimic the phosphate backbone of DNA. Aspartate and glutamate residues on the surface of Ocr were either mutated or chemically modified in order to investigate their contribution to the tight binding between Ocr and the EcoKI Type I R/M enzyme. Single or double mutations of Ocr had no discernible effect on binding to EcoKI or its methyltransferase component (M.EcoKI). Chemical modification was then used to specifically modify the carboxyl moieties of Ocr, thereby neutralizing the negative charges on the protein surface. Ocr samples modified to varying degrees were analysed to establish the extent of derivatisation, prior to extensive biophysical characterisation to assess the impact of these changes on binding to the EcoKI R/M system. The results of this analysis revealed that the electrostatic mimicry of Ocr increases the binding affinity for its target enzyme by at least ~800-fold. In addition, based on the known 3-D structure of the protein, a set of multiple mutations was introduced into Ocr, aimed at eliminating patches of negative charge from the protein surface. Specifically, between 5 and 17 acidic residues were targeted for mutation (Asp and Glu to Asn and Gln, respectively). Analysis of the in vivo activity of the mutant Ocr proteins, along with biophysical characterisation of the purified proteins, was then performed. Results from these studies identified regions of the Ocr protein that are critical in forming a tight association with the EcoKI R/M system. Furthermore, by comparing the relative contributions of different groups of acidic residues to the free energy of binding, the mechanism by which Ocr mimics the charge distribution of DNA has been delineated.
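As a quick back-of-the-envelope conversion (not taken from the thesis), the reported ≥800-fold gain in binding affinity corresponds, at 298 K, to a binding free-energy contribution of roughly

\Delta\Delta G = RT \ln 800 \approx (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})(6.68) \approx 16.6\ \mathrm{kJ\,mol^{-1}} \approx 4.0\ \mathrm{kcal\,mol^{-1}},

i.e. under these assumptions the electrostatic mimicry alone is worth about 4 kcal/mol of binding free energy.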
