
[en] METHOD FOR AUTOMATIC DETECTION OF STAMPS IN SCANNED DOCUMENTS USING DEEP LEARNING AND SYNTHETIC DATA GENERATION BY INSTANCE AUGMENTATION / [pt] MÉTODO PARA DETECÇÃO AUTOMÁTICA DE CARIMBOS EM DOCUMENTOS ESCANEADOS USANDO DEEP LEARNING E GERAÇÃO DE DADOS SINTÉTICOS ATRAVÉS DE INSTANCE AUGMENTATION

THALES LEVI AZEVEDO VALENTE 11 August 2022
Scanned documents in business environments have replaced large volumes of paper. Authorized professionals use stamps to certify critical information in these documents, and many companies need to verify that incoming and outgoing documents are properly stamped. In most inspection settings, people identify stamps by visual inspection, which is tiring, error-prone, and inefficient in both time spent and results obtained. Errors in manual stamp checking can lead to fines from regulatory bodies, interruption of operations, and even compromised workflows and financial transactions. This work proposes two methods that, combined, fully automate stamp detection in real-world scanned documents. The developed methods can handle datasets containing many stamp types with small sample sizes, multiple overlaps, different combinations per page, and missing data. The first method proposes a deep network architecture designed from the relationship between the problems identified in real-world stamps and the challenges and solutions of the object detection task reported in the literature. The second method proposes a novel instance-augmentation pipeline that builds stamp datasets from real data, to investigate whether stamp types with insufficient samples can still be detected. We evaluate the hyperparameters of the instance-augmentation approach and analyze the results with a deep explainability method. By successfully combining these two methods we achieve state-of-the-art results for the stamp detection task: 97.3 percent precision and 93.2 percent recall.
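The record does not include code; the following is a minimal sketch of the general instance-augmentation idea the abstract describes — cutting stamp instances from annotated pages and pasting them onto document backgrounds at random positions to synthesize new training samples with known bounding boxes. The function names, the multiplicative ink blending, and the toy sizes are all assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def paste_instance(page, stamp, top, left):
    """Paste a grayscale stamp crop onto a document page at (top, left).

    Hypothetical blending: multiplicative, so dark stamp ink darkens the
    page while the stamp's white background leaves it unchanged.
    """
    h, w = stamp.shape
    region = page[top:top + h, left:left + w]
    page[top:top + h, left:left + w] = region * (stamp / 255.0)
    return page

def augment(page, stamps, n_instances, rng=rng):
    """Synthesize a page by pasting n random stamp instances.

    Returns the page and the pasted bounding boxes (top, left, h, w),
    which become the detection labels for the synthetic sample.
    """
    boxes = []
    for _ in range(n_instances):
        stamp = stamps[rng.integers(len(stamps))]
        h, w = stamp.shape
        top = int(rng.integers(0, page.shape[0] - h))
        left = int(rng.integers(0, page.shape[1] - w))
        page = paste_instance(page, stamp, top, left)
        boxes.append((top, left, h, w))
    return page, boxes

# Toy usage: one white 100x100 "page", one dark 20x30 "stamp".
page = np.full((100, 100), 255.0)
stamp = np.full((20, 30), 60.0)
page, boxes = augment(page, [stamp], n_instances=2)
```

Because pastes may overlap, the same pipeline can also synthesize the multiple-overlap cases the abstract mentions.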

[pt] DESENVOLVIMENTO DE PIV ULTRA PRECISO PARA BAIXOS GRADIENTES USANDO ABORDAGEM HÍBRIDA DE CORRELAÇÃO CRUZADA E CASCATA DE REDE NEURAIS CONVOLUCIONAIS / [en] DEVELOPMENT OF ULTRA PRECISE PIV FOR LOW GRADIENTS USING HYBRID CROSS-CORRELATION AND CASCADING NEURAL NETWORK CONVOLUTIONAL APPROACH

CARLOS EDUARDO RODRIGUES CORREIA 31 January 2022
Throughout history, fluid engineering has been one of the most important areas of engineering because of its impact on transportation, energy, and military applications. Measuring velocity fields, in turn, is central to studies in aerodynamics and hydrodynamics. Velocity-field measurement techniques are mostly optical, with Particle Image Velocimetry (PIV) the most prominent. In recent years, important advances in computer vision based on convolutional neural networks have shown promise for improving the processing stage of these optical techniques. In this dissertation, a hybrid approach combining cross-correlation with a cascade of convolutional neural networks was used to develop a new PIV technique. The project built on recent work applying artificial neural networks to PIV to design the network architecture and training procedure. Several cascade configurations were tested until one was found that reduced the error by an order of magnitude for uniform flow. Beyond the uniform-flow cascade itself, the work also generated knowledge for building cascades for other types of flows.
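As background to the hybrid approach, the classical first stage of a PIV pipeline is FFT-based cross-correlation of two interrogation windows, which gives an integer-pixel displacement that finer stages (here, the neural cascade) refine. The sketch below shows only that classical step; the function name and window sizes are illustrative, and the thesis's cascade is not reproduced.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement of win_b relative to win_a
    via FFT-based cross-correlation (the convolution theorem)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation: IFFT(conj(FFT(a)) * FFT(b)) peaks at the shift.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above N/2 wrap around to negative displacements.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return int(dy), int(dx)

# Synthetic check: shift a random "particle image" by (3, -2) pixels.
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -2), axis=(0, 1))
print(piv_displacement(frame, shifted))  # prints (3, -2)
```

Sub-pixel accuracy — the "ultra precise" part — is exactly what correlation alone cannot deliver, which motivates cascading learned refinement stages on top of this estimate.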

Finfördelad Sentimentanalys : Utvärdering av neurala nätverksmodeller och förbehandlingsmetoder med Word2Vec / Fine-grained Sentiment Analysis : Evaluation of Neural Network Models and Preprocessing Methods with Word2Vec

Phanuwat, Phutiwat January 2024
Sentiment analysis is a technique aimed at automatically identifying the emotional tone of a text. Typically, text is classified as positive, neutral, or negative. The downside of this three-way classification is that nuances are lost. An extension adds two further categories, very positive and very negative; the challenge with this five-class classification is that high accuracy becomes harder to achieve as the number of categories grows, which motivates exploring different methods. The purpose of this study is therefore to evaluate classifiers such as MLP, CNN, and Bi-GRU, in combination with word2vec, for classifying sentiment into five categories, and to explore which preprocessing yields higher performance for word2vec.

The models were developed on the SST dataset, a well-known benchmark in fine-grained sentiment analysis. To determine which preprocessing works best for word2vec, the dataset was prepared in four ways: simple preprocessing (EF), EF plus stopword removal (EF+Without Stopwords), EF plus lemmatization (EF+Lemmatization), and a combination of both (EF+Without Stopwords/Lemmatization). Dropout was used to help the models generalize, and training was regularized with early stopping. To evaluate the classifiers, the best-performing preprocessing method was used and the optimal hyperparameters were explored. Accuracy and F1-score were the evaluation metrics.

The results showed that the EF method performed best among the preprocessing methods explored, and the model with the highest accuracy and F1-score was Bi-GRU.
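The four preprocessing variants can be sketched as a small pipeline. The thesis presumably used a full stopword list and a real lemmatizer; the tiny stopword set and lookup-table "lemmatizer" below are illustrative stand-ins that only show the pipeline's shape.

```python
import re

# Illustrative stand-ins, not the thesis's actual resources.
STOPWORDS = {"the", "a", "an", "is", "was", "of", "and"}
LEMMA = {"movies": "movie", "acted": "act", "films": "film"}

def simple(text):
    """Simple preprocessing ("EF"): lowercase and tokenize."""
    return re.findall(r"[a-z']+", text.lower())

def without_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def lemmatize(tokens):
    return [LEMMA.get(t, t) for t in tokens]

def preprocess(text, drop_stop=False, lemma=False):
    tokens = simple(text)
    if drop_stop:
        tokens = without_stopwords(tokens)
    if lemma:
        tokens = lemmatize(tokens)
    return tokens

sent = "The movies was acted well"
print(preprocess(sent))                              # EF
print(preprocess(sent, drop_stop=True))              # EF+Without Stopwords
print(preprocess(sent, drop_stop=True, lemma=True))  # EF+Without Stopwords/Lemmatization
```

Each variant's token stream would then be fed to word2vec training, and the resulting embeddings to the MLP/CNN/Bi-GRU classifiers being compared.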

Mathematical modelling of Centrosomin incorporation in Drosophila centrosomes

Bakshi, Suruchi D. January 2013
Centrosomin (Cnn) is an integral centrosomal protein in Drosophila with orthologues in several species, including humans. The human orthologue of Cnn is required for brain development, and Cnn is hypothesised to play a similar role in Drosophila. Control of Cnn incorporation into centrosomes is crucial for controlling asymmetric division in certain types of Drosophila stem cells. FRAP experiments on Cnn show that it recovers in a peculiar fashion, which suggests that Cnn may be incorporated closest to the centrioles and then spread radially outward, either diffusively or advectively. The aim of this thesis is to understand the mechanism of Cnn incorporation into Drosophila centrosomes, to determine the mode of transport of the incorporated Cnn, and to obtain parameter estimates for transport and biochemical reactions. A crucial unknown in the modelling process is the distribution of Cnn receptors. We begin by constructing coupled partial differential equation models with either diffusion or advection as the mechanism for incorporated Cnn transport. The simplest receptor distribution we begin with is a spherical, infinitesimally thick, impermeable shell. We refine the diffusion models using insights gained from comparing the model output with data (gathered during mitosis) and through careful assessment of the behaviour of the data. We show that a Gaussian receptor distribution is necessary to explain the Cnn FRAP data and that the data cannot be explained by other, simpler receptor distributions. We predict the exact form of the receptor distribution through data fitting and present preliminary experimental results from our collaborators suggesting that a protein called DSpd2 may show a matching distribution. Not only does this provide strong experimental support for a key prediction of our model, it also suggests that DSpd2 acts as a Cnn receptor.

We also show, using the mitosis FRAP data, that Cnn does not exhibit appreciable radial movement during mitosis, which precludes using these data to distinguish between diffusive and advective transport of Cnn. We use long-time Cnn FRAP data gathered during S-phase for this purpose. We fit the S-phase FRAP data using the DSpd2 profiles gathered at time points corresponding to the Cnn FRAP experiments. We also use data from FRAP experiments in which colchicine is injected into the embryos to destroy microtubules (since microtubules are suspected to play a role in advective transport of Cnn). From the analysis of all these data we show that Cnn is transported partly by advection and partly by diffusion. Thus, we are able to provide the first mechanistic description of the Cnn incorporation process. Further, we estimate parameters from the model fitting and predict how some of the parameters may change as nuclei progress from S-phase to mitosis. We also generate testable predictions regarding the control of the Cnn incorporation process. We believe this work will be useful for understanding the role of Cnn incorporation in centrosome function, particularly in asymmetrically dividing stem cells.
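The abstract does not reproduce the model equations. A hedged generic form of a radially symmetric binding-diffusion model with a Gaussian receptor distribution, of the kind described above, could look like the following; every symbol is illustrative (c free Cnn, b receptor-bound Cnn, D diffusivity, k_on/k_off binding rates, ρ receptor density) and none is taken from the thesis itself.

```latex
% Hedged sketch of a radially symmetric reaction--diffusion model
% with a Gaussian receptor distribution; symbols are illustrative.
\begin{align}
  \frac{\partial c}{\partial t} &=
    \frac{D}{r^{2}}\,\frac{\partial}{\partial r}
    \!\left(r^{2}\,\frac{\partial c}{\partial r}\right)
    - k_{\mathrm{on}}\,\rho(r)\,c + k_{\mathrm{off}}\,b, \\
  \frac{\partial b}{\partial t} &=
    k_{\mathrm{on}}\,\rho(r)\,c - k_{\mathrm{off}}\,b, \\
  \rho(r) &= \rho_{0}\,
    \exp\!\left(-\frac{(r-r_{0})^{2}}{2\sigma^{2}}\right).
\end{align}
```

An advective variant would replace the diffusion operator with a radial transport term of the form $-\frac{1}{r^{2}}\frac{\partial}{\partial r}(r^{2} v\, c)$; fitting either form to FRAP recovery curves is what yields the transport and binding parameter estimates the abstract mentions.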

CNN vs. RT: Comparative Analysis of Media Coverage of a Malaysian Airlines Aircraft MH17 Shooting Down within the Framework of Propaganda

Olga, Lopatynska January 2015
The motivation for this research is to explore the strategic narratives of the U.S. and Russia. The study investigates whether rhetoric between the West and Russia has returned to Cold War patterns, or whether the discourse has taken a new form. A primary goal is to examine whether media originating from the two countries spread propaganda and, more specifically, what kind of propaganda it is. The research compares the propaganda techniques most commonly applied by RT and CNN, and discusses the results in the context of prominent Cold War propaganda themes. This was done by comparing how the two media outlets reported on the crash of Malaysian Airlines flight MH17 in eastern Ukraine on July 17th, 2014. A framing analysis was applied to material from both channels over a period of four months. The results indicate that a number of propaganda techniques are used by both RT and CNN. Moreover, the channels' discourse is antagonistic, and the strategic narratives of the U.S. and Russia today show both similarities to and differences from those of the Cold War. Further research should look at other genres, events, and topics covered by the two outlets.

Artificial Neural Networks for Image Improvement

Lind, Benjamin January 2017
After a digital photo has been taken by a camera, it can be manipulated to be more appealing. Two ways of doing that are to reduce noise and to increase saturation; with time and skill in an image-editing program, this is usually done by hand. In this thesis, automatic image improvement based on artificial neural networks is explored and evaluated qualitatively and quantitatively. A new approach, building on an existing method for colorizing grayscale images, is presented and its performance compared both to simpler methods and to the state of the art in image denoising. Saturation is lowered and noise added to original images, which the methods receive as inputs to improve upon. The new method is shown to improve some images but not all, depending on the image and on how it was modified before being given to the method.
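The training pairs described above — clean originals degraded by lowering saturation and adding noise — can be sketched as follows. The blending-toward-gray desaturation, the parameter values, and the function name are assumptions for illustration, not the thesis's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img, sat=0.5, noise_std=0.05, rng=rng):
    """Make a training input from a clean RGB image (floats in [0, 1]):
    blend each pixel toward its gray value to lower saturation, then add
    Gaussian noise. sat and noise_std are illustrative choices."""
    gray = img.mean(axis=2, keepdims=True)
    desat = sat * img + (1.0 - sat) * gray   # sat=1 keeps, sat=0 grays out
    noisy = desat + rng.normal(0.0, noise_std, img.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = rng.random((32, 32, 3))
degraded = degrade(clean)
```

The network then learns the inverse mapping, from `degraded` back to `clean`, which is why the paper can evaluate improvement quantitatively against the known originals.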

Head and Shoulder Detection using CNN and RGBD Data

El Ahmar, Wassim 18 July 2019
Alex Krizhevsky and his colleagues changed the world of machine vision and image processing in 2012 when their deep learning model, AlexNet, won the ImageNet Large Scale Visual Recognition Challenge with an error rate more than 10.8 percentage points lower than that of their closest competitor. Ever since, deep learning approaches have been an area of extensive research for tasks such as object detection, classification, and pose estimation. This thesis presents a comprehensive analysis of different deep learning models and architectures that have delivered state-of-the-art performance in various machine vision tasks. These models are compared to each other and their strengths and weaknesses are highlighted. We introduce a new approach for human head and shoulder detection from RGB-D data based on a combination of image processing and deep learning. Candidate head-top locations (CHL) are generated by a fast and accurate image processing algorithm that operates on depth data; we propose enhancements that make the CHL algorithm three times faster. Different deep learning models are then evaluated for classification and detection on the candidate head-top locations, regressing head bounding boxes and detecting shoulder keypoints. We propose three small convolutional neural network models for this problem. Experimental results for different architectures of our model are reported, and we compare the performance of our model to MobileNet. Finally, we show the differences between three types of inputs to the CNN models: RGB images; a 3-channel representation generated from depth data (depth map, multi-order depth template, and height difference map, or DMH); and a 4-channel input composed of RGB+D data.
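Of the three input types compared, the 4-channel RGB+D tensor is the simplest to illustrate. The sketch below stacks an RGB image with a min-max normalized depth map; the channel layout, normalization, and function name are assumptions — the thesis may scale or order channels differently.

```python
import numpy as np

def make_four_channel(rgb, depth):
    """Stack an RGB image (uint8) with a depth map into a 4-channel
    float input of the RGB+D kind the abstract compares against."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-6)  # depth -> [0, 1]
    return np.dstack([rgb.astype(np.float32) / 255.0, d])

# Toy usage: a black 48x64 image with depth readings in millimetres.
rgb = np.zeros((48, 64, 3), dtype=np.uint8)
depth = np.random.default_rng(0).integers(500, 4000, (48, 64))
x = make_four_channel(rgb, depth)
```

The DMH representation replaces the RGB channels entirely with three depth-derived maps, letting the same detector architectures run without any colour input.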

PERSON RE-IDENTIFICATION & VIDEO-BASED HEART RATE ESTIMATION

Dahjung Chung 13 August 2019
Estimation of physiological vital signs such as heart rate (HR) has attracted a lot of attention due to the increased interest in health monitoring. The most common HR estimation methods, such as photoplethysmography (PPG), require physical contact with the subject and limit the subject's movement. Video-based HR estimation, known as videoplethysmography (VHR), uses image and video processing techniques to estimate human HR remotely. Even though various VHR methods have been proposed over the past five years, challenging problems remain, such as diverse skin tones and motion artifacts. In this thesis we present a VHR method using temporal difference filtering and small-variation amplification, based on the assumption that the HR signal appears as small color variations of the skin, i.e. micro-blushing. This method is evaluated and compared with two previous VHR methods. Additionally, we propose spatial pruning as an alternative to skin detection, and homomorphic filtering for motion artifact compensation.

Intelligent video surveillance is a crucial tool for public safety. One of its goals is to extract meaningful information efficiently from large volumes of surveillance video. Person re-identification (ReID) is a fundamental task in intelligent video surveillance: for example, ReID can identify a person of interest for law enforcement when they re-appear in different cameras at different times. ReID can be formally defined as establishing the correspondence between images of a person taken by different cameras. Even though ReID has been intensively studied over the past years, it is still an active research area due to challenges such as illumination variation, occlusion, viewpoint changes, and the lack of data. In this thesis we propose a weighted two-stream training objective function that combines the Siamese cost of the spatial and temporal streams with the objective of predicting a person's identity. Additionally, we present a camera-aware image-to-image translation method using a similarity-preserving StarGAN (SP-StarGAN) as data augmentation for ReID. We evaluate our proposed methods on publicly available datasets and demonstrate their efficacy.
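The core VHR idea — heart rate as a small periodic color variation of skin — can be sketched as a frequency-domain peak search on the mean skin-pixel intensity over time. This is a generic illustration of the principle, not the thesis's temporal difference filtering; the frame rate and band limits are assumptions.

```python
import numpy as np

FS = 30.0  # assumed camera frame rate (frames per second)

def estimate_hr(green_trace, fs=FS, lo=0.7, hi=4.0):
    """Estimate heart rate (bpm) from the mean green-channel intensity of
    a skin region over time: remove the mean, then take the strongest
    FFT component within a plausible 42-240 bpm band."""
    x = green_trace - np.mean(green_trace)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic pulse at 1.2 Hz (72 bpm) buried in sensor noise.
t = np.arange(0, 20, 1.0 / FS)
rng = np.random.default_rng(2)
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.02, t.size)
print(round(estimate_hr(trace)))  # prints 72
```

Motion artifacts corrupt exactly this trace, which is why the thesis adds homomorphic filtering before the periodicity analysis.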

Barcode Detection and Decoding in On-line Fashion Images

Qingyu Yang 14 May 2019
A barcode is a machine-readable representation of data, often encoding information about goods offered for sale, and it frequently appears in on-line fashion images. Detecting and decoding barcodes has a variety of applications in the on-line marketplace. However, existing methods have limitations when detecting barcodes against backgrounds such as tassels, stripes, and textures in fashion images. Our work therefore focuses on identifying the barcode region and distinguishing a barcode from patterns that resemble it. We accomplish this by adding a post-processing technique after morphological operations. We also apply a Convolutional Neural Network (CNN) to treat this as a typical object detection problem; our results compare the performance of our algorithm against a previous method. For the decoding part, a package supporting the common barcode symbologies is used to decode the detected barcode. In addition, we add a pre-processing transformation step that rectifies skewed barcode images to improve the probability of decoding success.
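A common heuristic behind morphological barcode localization — not the thesis's exact algorithm — is that a 1-D barcode's vertical bars produce strong horizontal gradients but weak vertical ones, which separates it from isotropic textures. A minimal numpy sketch:

```python
import numpy as np

def barcode_bbox(gray, thresh=0.2):
    """Locate a 1-D barcode region in a grayscale image by gradient
    energy: score rows by (horizontal minus vertical) gradient sums and
    columns by horizontal gradient sums, then threshold. A generic
    heuristic in the spirit of morphological pipelines."""
    gx = np.abs(np.diff(gray.astype(np.float32), axis=1))
    gy = np.abs(np.diff(gray.astype(np.float32), axis=0))
    # Barcode rows: lots of horizontal gradient, little vertical gradient.
    row_score = gx.sum(axis=1)[:-1] - gy.sum(axis=1)
    col_score = gx.sum(axis=0)
    rows = np.where(row_score > thresh * row_score.max())[0]
    cols = np.where(col_score > thresh * col_score.max())[0]
    return rows.min(), rows.max(), cols.min(), cols.max()

# Synthetic page: uniform background with alternating bars
# in rows 20-39, columns 10-48.
img = np.full((64, 64), 1.0)
img[20:40, 10:50:2] = 0.0
r0, r1, c0, c1 = barcode_bbox(img)
```

The vertical-gradient penalty is what suppresses stripe- and tassel-like distractors that are strong in both directions or in the wrong direction; the thesis's post-processing step plays an analogous disambiguating role.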

Classifying Material Defects with Convolutional Neural Networks and Image Processing

Heidari, Jawid January 2019
Fantastic progress has been made in machine learning and deep neural networks over the last decade. Deep convolutional neural networks (CNNs) have been hugely successful in image classification and object detection; these networks can automate many industrial processes and increase efficiency, image classification among them. This thesis addressed two different approaches to the same problem. The first approach implemented two CNN models to classify images: the large pre-trained VGG model was retrained using so-called transfer learning, training only the top layers of the network, while the other model was a smaller one with customized layers. The trained models are an end-to-end solution: the input is an image, and the output is a class score. The second strategy implemented several classical image processing algorithms to detect the individual defects present in the pictures, working as a rule-based object detection algorithm; the Canny edge detection algorithm, combined with two mathematical morphology operations, formed the backbone of this strategy. Sandvik Coromant, a leading producer of high-quality metal cutting tools, gathered the approximately 1000 microscope images used in this thesis. During the manufacturing process some unwanted defects occur in the products; these are analyzed by taking images with a conventional microscope at 100x and 1000x magnification. The three essential defect types investigated in this thesis are defined as Por, Macro, and Slits. Experiments conducted during this thesis show that CNN models are a good approach to classifying impurities and defects in the metal industry, and the potential is high. The validation accuracy reached circa 90 percent, and the final evaluation accuracy was around 95 percent, which is an acceptable result. The pre-trained VGG model reached much higher accuracy than the customized model.

The Canny edge detection algorithm, combined with dilation, erosion, and contour detection, also produced a good result: it detected the majority of the defects present in the images.
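The two morphology operations paired with Canny here, dilation and erosion, compose into morphological closing, the standard trick for bridging small gaps in an edge map before contour detection. A self-contained sketch with a 3x3 structuring element (the element size and the pad-with-zeros border handling are illustrative choices):

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation: a pixel turns on if any 8-neighbour is on."""
    p = np.pad(mask, 1)  # pads with False
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask):
    """3x3 binary erosion: a pixel stays on only if all 8-neighbours are on."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def close_edges(edges):
    """Morphological closing (dilate then erode) to bridge one-pixel
    gaps in a Canny edge map before contour detection."""
    return erode(dilate(edges))

# Toy edge map: a horizontal edge with a one-pixel gap at column 4.
edges = np.zeros((7, 9), dtype=bool)
edges[3, 1:4] = True
edges[3, 5:8] = True
closed = close_edges(edges)
```

After closing, the gap pixel is filled and the edge forms one connected run, so a subsequent contour pass sees a single defect boundary instead of two fragments.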
