1

Assisted Annotation of Sequential Image Data With CNN and Pixel Tracking

Chan, Jenny January 2021 (has links)
In this master thesis, different neural networks have been investigated for annotating objects in video streams with partially annotated data as input. Annotation in this thesis refers to bounding boxes around the targeted objects. Two different methods have been used, ROLO and GOTURN: object detection with tracking and object tracking with pixels, respectively. The data set used for validation consists of surveillance footage of varying image resolution, image size and sequence length. The original models were modified to fit the test data. Promising results were shown for the modified GOTURN, where the partially annotated data was used as assistance in tracking. The model is robust and provides sufficiently accurate object detections for practical use. With the new model, human resources for image annotation can be reduced by at least half.
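
A minimal conceptual sketch of the GOTURN idea referenced in this abstract: a CNN regresses the target's bounding box in the current frame from two crops, one around the previous (or partially annotated) box and one from the same region of the new frame. This is an illustrative PyTorch module, not the thesis's modified model; the layer sizes and crop size are assumptions.

```python
import torch
import torch.nn as nn

class GoturnLikeTracker(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional backbone applied to both crops.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)),
        )
        # Fully connected head regresses (x1, y1, x2, y2) in crop coordinates.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 64 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, 4),
        )

    def forward(self, prev_crop, curr_crop):
        feats = torch.cat(
            [self.backbone(prev_crop), self.backbone(curr_crop)], dim=1
        )
        return self.head(feats)

# Usage: both crops resized to a fixed size, here assumed 64x64.
tracker = GoturnLikeTracker()
prev_crop = torch.randn(1, 3, 64, 64)   # crop around last known box
curr_crop = torch.randn(1, 3, 64, 64)   # same region in the new frame
box = tracker(prev_crop, curr_crop)     # predicted bounding box
```

A partially annotated frame can simply replace the predicted box as the next `prev_crop` anchor, which is the "assistance" role the abstract describes.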
2

Automatic Classification of Conditions for Grants in Appropriation Directions of Government Agencies

Wallerö, Emma January 2022 (has links)
This study explores the possibilities of classifying language as governing or not. The basic premise is to examine how detecting and quantifying governing conditions from thousands of financial grants in appropriation directions can be performed automatically, and to create a data set for machine learning on this text classification task. In this study, automatic classification is performed along with an annotation process for extracting and labelling data. Automatic classification can be performed using a variety of data, methods and tasks. The classification task aims mainly to divide conditions into those that govern the conduct of the specific agency and those that do not. The data consist of text from the chapter of the appropriation directions concerning financial grants. The text is split into sentences, keeping only sentences longer than 15 words. An iterative annotation process is then performed to obtain labelled conditions, involving three expert annotators for the final data set and layman annotations for initial experiments. Given the data extracted from the annotation process, SVM, BiLSTM and KB-BERT classifiers are trained and evaluated. All models are evaluated using no context information, with bullet points as an exception, for which a preceding, generally descriptive sentence is included. Apart from this default input representation, two alternatives are evaluated: including the preceding sentence along with the target sentence, and adding the specific agency to the target sentence. The final inter-annotator agreement was not optimal, with Cohen's Kappa scores that can be interpreted as moderate agreement. Using a majority vote for the test set somewhat mitigated the non-optimal agreement for that specific set. The best performing model across all input representation types was KB-BERT using no context information, with an F1-score of 0.81 and an accuracy of 0.89 on the test set. All models performed better on sentences classified as governing, which might be partially due to the final annotated data sets being skewed. Possible future studies include further iterative annotation, working towards a clear and as objective as possible definition of a governing condition, and exploring data augmentation to counteract the uneven class distribution in the final data sets.
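
A minimal sketch of the binary sentence-classification setup described above, fine-tuning KB-BERT to label a Swedish sentence as governing or not. It assumes the public Hugging Face checkpoint "KB/bert-base-swedish-cased"; the example sentence, label and hyperparameters are placeholders, not the thesis's actual data or configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "KB/bert-base-swedish-cased", num_labels=2  # 0 = non-governing, 1 = governing
)

# Placeholder training example, roughly "The agency shall report how the funds were used."
sentences = ["Myndigheten ska redovisa hur medlen har använts."]
labels = torch.tensor([1])

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss over the 2 classes
outputs.loss.backward()
optimizer.step()
```

The context-based input representations the abstract mentions would simply concatenate the preceding sentence or the agency name into the tokenized input.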
3

Review and Analysis of single-cell RNA sequencing cell-type identification and annotation tools

Raoux, Corentin January 2021 (has links)
Single-cell RNA sequencing makes it possible to study gene expression at the level of individual cells. However, one of the main challenges of single-cell RNA-sequencing analysis today is the identification and annotation of cell types. The current method consists of manually checking the expression of top differentially expressed genes and comparing them with related cell-type markers available in scientific publications; it is therefore time-consuming and labour-intensive. Nevertheless, in the last two years, numerous automatic cell-type identification and annotation tools using different strategies have been created. However, the lack of specific comparisons of those tools in the literature, especially for immuno-oncologic and oncologic purposes, makes it difficult for laboratories and companies to know objectively which tools are best for annotating cell types. In this project, a review of the current tools and an evaluation of the R tools were carried out. The annotation performance, the computation time and the ease of use were assessed. Based on these preliminary results, the best R tools appear to be ClustifyR (fast and rather precise) and SingleR (precise) among the correlation-based tools, and SingleCellNet (precise and rather fast) and scPred (precise, but many cell types remain unassigned) among the supervised classification tools. Finally, among the marker-based tools, MAESTRO and SCINA are rather robust if provided with high-quality markers.
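
A minimal sketch of the correlation-based strategy used by tools such as SingleR and ClustifyR: each query expression profile is compared against reference cell-type profiles, and the best-correlated type is assigned. The toy matrices and cell-type names below are placeholders; the real tools operate on normalized counts restricted to variable or marker genes.

```python
import numpy as np
from scipy.stats import spearmanr

def annotate(query, reference, type_names):
    """query: cells x genes; reference: cell types x genes (same gene order)."""
    labels = []
    for cell in query:
        # Spearman correlation of this cell against every reference profile.
        corrs = [spearmanr(cell, ref)[0] for ref in reference]
        labels.append(type_names[int(np.argmax(corrs))])
    return labels

rng = np.random.default_rng(0)
reference = rng.random((3, 50))                          # 3 cell types, 50 genes
query = reference[[0, 2]] + 0.05 * rng.random((2, 50))   # noisy copies of types 0 and 2
print(annotate(query, reference, ["T cell", "B cell", "NK cell"]))
# Expected: ['T cell', 'NK cell']
```

Supervised tools like SingleCellNet and scPred instead train a classifier on the reference, and marker-based tools like SCINA score cells against curated gene lists, which is why marker quality matters so much for the latter group.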
4

Data Collection and Layout Analysis on Visually Rich Documents using Multi-Modular Deep Learning.

Stahre, Mattias January 2022 (has links)
The use of Deep Learning methods for Document Understanding has been embraced by the research community in recent years. A requirement for Deep Learning methods, and especially Transformer networks, is access to large datasets. The objective of this thesis was to evaluate a state-of-the-art model for Document Layout Analysis on a public and a custom dataset. Additionally, the objective was to build a pipeline for creating datasets specifically for Visually Rich Documents. The research methodology consisted of a literature study to find the state-of-the-art model for Document Layout Analysis and a relevant dataset for evaluating the chosen model. The literature study also included research on how existing datasets in the domain were collected and processed. Finally, an evaluation framework was created. The evaluation showed that the chosen multi-modal transformer network, LayoutLMv2, performed well on the DocBank dataset. The custom-built dataset was limited by class imbalance, although performance was good for the larger classes. The annotator tool and its auto-tagging feature performed well, and the proposed pipeline showed great promise for creating datasets of Visually Rich Documents. In conclusion, this thesis answers the research questions and suggests two main opportunities. The first is to encourage others to build datasets of Visually Rich Documents using a pipeline similar to the one presented in this paper. The second is to evaluate the possibility of creating the visual token information for LayoutLMv2 as part of the transformer network rather than using a separate CNN.
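
A minimal sketch of running the multi-modal LayoutLMv2 model discussed above for token-level layout classification. It assumes the public checkpoint "microsoft/layoutlmv2-base-uncased" and a placeholder image path; note that the processor relies on pytesseract for OCR and the model on detectron2 for its visual backbone, the separate CNN whose integration into the transformer the abstract proposes evaluating. The label count is an assumption for illustration.

```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv2-base-uncased", num_labels=5  # e.g. title, paragraph, table, ...
)

image = Image.open("page.png").convert("RGB")      # placeholder document image
encoding = processor(image, return_tensors="pt")   # OCR text + boxes + pixel data
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)            # one layout label per token
```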
