
Contributions to Document Image Analysis: Application to Music Score Images

Castellanos, Francisco J. 25 November 2022 (has links)
This thesis advances the state of the art in several key processes within the typical workflow of optical music recognition (OMR) systems. Document analysis is an early and crucial stage of that workflow, whose goal is to provide a simplified version of the incoming information, that is, of the music document images. The remaining OMR processes can exploit this simplification to solve their respective tasks more easily, focusing only on the information they need. A clear example is the process devoted to locating the regions occupied by the different staves. Once their coordinates are obtained, the individual staves can be processed to retrieve the musical symbol sequence they contain and thus build a digital version of their content. The research carried out for this thesis is supported by a series of contributions published in high-impact journals and at international conferences. Specifically, the thesis comprises a set of 4 articles published in journals indexed in the Journal Citation Reports and ranked in the top quartiles by impact factor, with a total of 58 citations according to Google Scholar. It also includes 3 papers presented at different editions of an international conference rated Class A by the GII-GRIN-SCIE classification. The publications address closely related topics, focusing mainly on document analysis for OMR, with excursions into transcription of the music sequence and domain adaptation techniques.
There are also publications showing that some of these techniques can be applied to other types of document images, making the proposed solutions more interesting for their ability to generalize and adapt to other contexts. Beyond document analysis, the thesis also studies how these processes affect the final transcription of the music notation, which is, after all, the ultimate goal of OMR systems, but which had not been investigated until now. Finally, given the vast amount of data that neural networks require to build a sufficiently robust model, the use of domain adaptation techniques is also studied, in the hope that their success will open the door to the future applicability of OMR systems in real-world settings. This is especially relevant in the OMR context because of the large number of documents lacking the ground-truth data needed to train neural network models; a solution that leverages the limited labeled collections to process documents of a different nature would enable a more practical use of these automatic transcription tools. After completing this thesis, it is clear that OMR research has not yet reached the limits of what the technology can achieve, and several avenues remain to be explored. Indeed, thanks to this work, new horizons have opened up that could be studied so that one day these systems can be used to automatically digitize and transcribe the written and printed musical heritage at large scale and in reasonable time. Among these new research lines, the following stand out:
· This thesis has published contributions that use a domain adaptation technique to perform document analysis with good results. Exploring new domain adaptation techniques could be key to building robust neural network models without the need to manually label a portion of every musical work to be digitized.
· Applying domain adaptation techniques to other processes, such as transcription of the music sequence, could ease the training of models capable of performing this task. Supervised learning algorithms require qualified staff to manually transcribe part of the collections, but the time and financial costs associated with this process represent a substantial effort if the final goal is to transcribe this entire cultural heritage. It would therefore be interesting to study the applicability of these techniques in order to drastically reduce this need.
· During the thesis, the effect of the document scale factor on the performance of several OMR processes was studied. Besides scale, another important factor to address is orientation, since document images will not always be perfectly aligned and may suffer some rotation or deformation that causes errors in detecting the information. It would therefore be interesting to study how these deformations affect transcription and to find solutions viable for the context at hand.
· As a more general and basic case, the thesis studied how different general-purpose object detection models could be used to extract the staves for later processing. These elements were assumed to be rectangular and unrotated, but this will not always be the case. Another possible research avenue would therefore be to study other kinds of models that can detect polygonal, not just rectangular, elements, as well as the possibility of detecting objects with some inclination without introducing overlap between consecutive elements, as happens in some manual labeling tools such as the one used in this thesis to obtain labeled data for experimentation: MuRET.
These research lines are feasible a priori, but an exploration process is needed to identify the techniques that can usefully be adapted to the OMR domain. The results obtained during the thesis indicate that these lines may yield new contributions to this field and thus take one more step toward the practical, real-world application of these systems at large scale.

Case Studies in Document Driven Design of Scientific Computing Software

Jegatheesan, Thulasi January 2016 (has links)
The use and development of Scientific Computing Software (SCS) has become commonplace in many fields. It is used to motivate decisions and support scientific research. Software Engineering (SE) practices have been shown to improve software quality in other domains, but these practices are not commonly used in Scientific Computing (SC). Previous studies have attributed the infrequent use of SE practices to the incompatibility of traditional SE with SC development. In this research, the SE development process, Document Driven Design (DDD), and SE tools were applied to SCS using case studies. Five SCS projects were redeveloped using DDD and SE best practices. Interviews with the code owners were conducted to assess the impact of the redevelopment. The interviews revealed that development practices and the use of SE varied between the code owners. After redevelopment, the code owners agreed that a systematic development process can be beneficial, and they had a positive or neutral response to the software artifacts produced during redevelopment. The code owners, however, felt that the documentation produced by the redevelopment process requires too great a time commitment. To promote the use of SE in SCS development, SE practices must integrate well with current development practices of SC developers and not disrupt their regular workflow. Further research in this field should encourage practices that are easy to adopt by SC developers and should minimize the effort required to produce documentation. / Thesis / Master of Science (MSc)

A Dynamic, Interactive Approach to Learning Engineering and Mathematics

Beaulieu, Jason 17 July 2012 (has links)
The major objectives of this thesis involve the development of dynamic and interactive applications aimed at complementing traditional engineering and science coursework, laboratory exercises, and research, and at providing users with easy access by publishing the applications on Wolfram's Demonstrations website. A number of applications have been carefully designed to meet cognitive demands as well as provide easy-to-use interactivity. Recent technology introduced by Wolfram Mathematica, called CDF (Computable Document Format), provides a communication pipeline through which technical content can be presented in an interactive format. This new and exciting technology has the potential to help students enhance the depth and quality of their understanding, and to provide teachers and researchers with methods to convey concepts at all levels. Our approach to helping students and researchers teach and understand traditionally difficult concepts in science and engineering relies on the ability to use dynamic, interactive learning modules anywhere, at any time. The strategy for developing these applications resulted in some excellent outcomes. A variety of subjects were explored, including numerical integration, Green's functions and Duhamel's method, chaotic maps, one-dimensional diffusion using numerical methods, and two-dimensional wave mechanics using analytical methods. The wide range of topics and fields of study gives CDF technology a powerful edge in connecting with all types of learners through interactive learning. / Master of Science
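One of the topics listed above, one-dimensional diffusion solved numerically, is commonly handled with an explicit forward-time, centered-space (FTCS) update. The sketch below is our own illustration of that standard scheme, not code taken from the thesis or its CDF applications:

```python
def diffuse_1d(u, r, steps):
    """Explicit FTCS update for 1-D diffusion with fixed (Dirichlet) ends.

    u     : list of temperatures on a uniform grid
    r     : D * dt / dx**2; the scheme is stable only for r <= 0.5
    steps : number of time steps to advance
    """
    u = list(u)
    for _ in range(steps):
        # Interior points get the three-point second-difference update;
        # the two boundary values are held fixed.
        u = [u[0]] + [
            u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
    return u

# A unit spike spreads symmetrically to its neighbours in one step.
print(diffuse_1d([0, 0, 1, 0, 0], 0.25, 1))  # → [0, 0.25, 0.5, 0.25, 0]
```

An interactive module like the ones described would typically expose `r` and `steps` as sliders so students can watch the profile spread, and see the scheme blow up once `r` exceeds 0.5.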

Incorporating semantic and syntactic information into document representation for document clustering

Wang, Yong 06 August 2005 (has links)
Document clustering is a widely used strategy for information retrieval and text data mining. In traditional document clustering systems, documents are represented as a bag of independent words. In this project, we propose to enrich the representation of a document by incorporating semantic information and syntactic information. Semantic analysis and syntactic analysis are performed on the raw text to identify this information. A detailed survey of current research in natural language processing, syntactic analysis, and semantic analysis is provided. Our experimental results demonstrate that incorporating semantic information and syntactic information can improve the performance of our document clustering system for most of our data sets. A statistically significant improvement can be achieved when we combine both syntactic and semantic information. Our experimental results using compound words show that using only compound words does not improve the clustering performance for our data sets. When the compound words are combined with original single words, the combined feature set gets slightly better performance for most data sets. But this improvement is not statistically significant. In order to select the best clustering algorithm for our document clustering system, a comparison of several widely used clustering algorithms is performed. Although the bisecting K-means method has advantages when working with large datasets, a traditional hierarchical clustering algorithm still achieves the best performance for our small datasets.

A Social Description of the Damascus Document

Martens, John W. January 1986 (has links)
<p>Missing Page 56.</p> / <p>Recent Biblical scholarship has acknowledged and stressed the sociological factors at play in the formation and continuing development of religious beliefs and in the structure of religious communities. By examining the text of the Damascus Document (CD), this thesis attempts to reconstruct the social structure of the CD community, and suggests reasons for its origins and development based on the social forces which contributed to its self-definition.</p> <p>The first chapter examines the problem of deriving historical information from texts which are not strictly historical, and suggests a methodology which allows for the extraction of social reality from religious texts. Following this, a date of origination is suggested, the historical period examined, and the origins of the community described.</p> <p>The second chapter discusses the community's self-definition, and the implications this definition and a new social situation had on their beliefs and community structure. An analysis of the community's response is then offered. The third chapter examines modern sectarian theory in relation to the CD community. Using the information of the previous two chapters, the CD community is discussed as a sect and compared to another sectarian movement. The conclusions deal with the community's unique role in the religious fabric of ancient Palestine, and with their common role as a sect.</p> / Master of Arts (MA)

Lock-based concurrency control for XML

Ahmed, Namiruddin January 2006 (has links)
No description available.

Development of an Interface Analysis Template for System Design Analysis

Uddin, Amad, Campean, Felician, Khan, M. Khurshid January 2015 (has links)
Interface definition is an essential and integral part of systems engineering. In current practice, interface requirements or control documents are generally used to define systems or subsystems interfaces. One of the challenges with the use of such documents in product development process is the diversity in their types, methodology, contents coverage, and structure across various design levels and across multidisciplinary teams, which often impedes the design process. It is important that interface information is described with appropriate detail and minimal or no ambiguity at each design level. The purpose of this paper is to present an interface analysis template (IAT) as a structured tool and coherent methodology, built upon a critical review of existing literature concepts, with the aim of using and implementing the same template for capturing interface requirements at various levels of design starting from stakeholders' level down to component level analysis. The proposed IAT is illustrated through a desktop case study of an electric pencil sharpener, and two examples of application to automotive systems.

Head Tail Open: Open Tailed Classification of Imbalanced Document Data

Joshi, Chetan 23 April 2024 (has links) (PDF)
Deep learning models for scanned document image classification and form understanding have made significant progress in the last few years. High accuracy can be achieved by a model with the help of copious amounts of labelled training data for closed-world classification. However, very little work has been done in the domain of fine-grained, head-tailed (class-imbalanced, with some classes having many data points and some having few) open-world classification for documents. Our proposed method achieves better classification results than the baseline on the head-tail-novel/open dataset. Our techniques include separating the head and tail classes and transferring knowledge from the head data to the tail data. This transfer of knowledge also improves the capability of recognizing a novel category by 15% compared to the baseline.

Teaching Visual Literacy and Document Design in First-Year Composition

Brizee, Allen 02 June 2003 (has links)
Given our ability to communicate quickly and effectively through visuals such as signs and pictures, it is not surprising that graphical messages now permeate our technology-oriented culture. Magazines, television, and computers integrate text and graphics to convey information. As teachers of writing, we need to study and understand these visually enhanced texts, because they have become the standard for communication in our society. Beyond this, we should learn how to teach students about visual literacy and document design so that they can effectively interpret these visually enhanced texts and create documents that use visuals and words together; this will also prepare students for college writing and workplace writing. Naturally, there exists some uncertainty surrounding the inclusion of these ideas in first-year composition. First-year writing is already difficult to teach because colleges expect us to foster critical reading, critical thinking, and critical writing skills in students from a wide variety of disciplines. Compounding these challenges are large class sizes and shrinking budgets. However, many scholars assert that visual thinking is an essential part of the learning process and must be included in writing courses. Specifically, some scholars suggest that we should integrate visual literacy and document design into first-year composition courses to help students create effective documents for college and the workplace. This thesis explores the scholarship surrounding visual literacy, document design, and professional writing in first-year composition. The project underscores the importance of using students' visual thinking processes to help them organize and present information in college writing and beyond. / Master of Arts

HandText Detector AI

Qurban, Hamidullah Ehsani January 2024 (has links)
This master's thesis explores the application of Artificial Intelligence (AI) in the digitization of unstructured documents, which contain normal text, handwritten text, and also integers - a critical aspect of infrastructure management. As digitization progresses, handling such documents efficiently remains a considerable challenge due to their unstructured nature and varied handwriting quality. The research evaluated several Optical Character Recognition (OCR) models, including Pytesseract, EasyOCR, KerasOCR, and docTR, to identify the most effective method for converting handwritten documents into digital, searchable formats. In this study, each model was rigorously tested using a carefully curated dataset containing handwritten and printed documents of varying quality and complexity. The models were assessed on their ability to accurately recognize characters and words, handle multilingual documents, and process a mix of handwritten and printed content. Performance metrics such as Character Error Rate (CER) and Word Error Rate (WER) were used to quantify their accuracy. The results reveal that each model exhibits unique strengths. PyTesseract excelled at converting high-quality images to text with minimal errors, while EasyOCR demonstrated robust recognition across multiple languages. KerasOCR and docTR proved effective at handling complex, unstructured documents thanks to their advanced AI architectures. By leveraging these technologies, the thesis proposes an optimized approach that integrates metadata extraction to enhance the organization and searchability of digitized content. The proposed solution, compatible with both CPU and GPU platforms, reduces the time and resources required for manual processing, making it accessible to a broader audience. This research contributes to the field by offering insights into the performance of different OCR models and providing a practical, scalable solution for digitizing and managing unstructured handwritten documents. The solution promises to significantly improve the efficiency of document management, paving the way for future innovations in this space.
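The CER and WER metrics used in the evaluation above are both normalized edit distances: the Levenshtein distance between reference and hypothesis, divided by the reference length, over characters for CER and over word tokens for WER. A minimal sketch of these standard definitions (our own illustration, not the thesis's evaluation code):

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (strings or lists of words)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (or match)
        prev = cur
    return prev[-1]

def cer(ref, hyp):
    """Character Error Rate: character edit distance over reference length."""
    return levenshtein(ref, hyp) / len(ref)

def wer(ref, hyp):
    """Word Error Rate: the same idea over whitespace-separated tokens."""
    return levenshtein(ref.split(), hyp.split()) / len(ref.split())
```

Note that both rates can exceed 1.0 when the hypothesis is much longer than the reference, which is why OCR comparisons of the kind described usually report them alongside the raw transcripts.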
