11

Automatisk dokumentklassificering med hjälp av maskininlärning / Automated Document Classification using Machine Learning

Dufberg, Johan January 2018 (has links)
Manually handling and classifying large quantities of text documents takes a great deal of time and staff; using machine learning for this purpose is one alternative. This thesis aims to give the reader a fundamental insight into how automatic text classification works and a quick overview of some of the most common algorithms used for the purpose. The examples shown use English-language news articles about technology and finance, but the thesis starts from the question of how mature the technology is for handling official Swedish documents.
The first part is the scientific background on which the second part rests; several algorithms and techniques are described there and later used in practical examples. The report does not aim to describe a finished product but serves as a proof of concept for the use of text classification. Finally, the results of the tests are discussed, and one of the conclusions drawn is that when data is abundant, a relatively simple classifier can perform almost on par with a technically more developed and complex classifier. Relating classifier performance to the time required indicates that complex classifiers need hardware with high computational power and plenty of memory to be viable.
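The conclusion above, that a simple classifier can rival a more complex one when enough data is available, can be illustrated with a minimal bag-of-words Naive Bayes classifier, one of the standard "simple" algorithms for this task. This is a hypothetical sketch with invented example data, not code from the thesis:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Minimal multinomial Naive Bayes over bag-of-words features."""

    def __init__(self):
        self.class_counts = Counter()            # documents per class
        self.word_counts = defaultdict(Counter)  # word counts per class
        self.vocab = set()

    def train(self, documents):
        for label, text in documents:
            self.class_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            score = math.log(self.class_counts[label] / total_docs)  # log prior
            total_words = sum(self.word_counts[label].values())
            for word in text.lower().split():
                # Laplace smoothing avoids zero probabilities for unseen words
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayesTextClassifier()
clf.train([
    ("tech", "new chip design boosts processor speed"),
    ("tech", "software update improves machine learning models"),
    ("finance", "stock markets rally as interest rates fall"),
    ("finance", "bank profits rise on higher interest income"),
])
print(clf.predict("interest rates and bank stocks"))  # prints "finance"
```

Despite its simplicity, this family of classifiers is the usual baseline that more complex models are measured against, which is exactly the comparison the thesis draws.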
12

Extração de metadados utilizando uma ontologia de domínio / Metadata extraction using a domain ontology

Oliveira, Luis Henrique Gonçalves de January 2009 (has links)
The main purpose of the Semantic Web is to provide machine-processable metadata that describes the semantics of resources, making it easier to search, filter, condense, or exchange knowledge for human users. In this context, digital libraries are the applications that are beginning the process of adding semantic annotations to information available on the Web. A digital library can be defined as a collection of digital resources selected by some criteria, with some logical organization, and available through distributed network retrieval. To facilitate the retrieval process, metadata are used to describe the stored content. However, manual metadata generation is a complex, time-consuming, and error-prone task. Thus, automatic or semi-automatic metadata generation would be of great help to authors, removing one task from the document publishing process. The research in this dissertation approached this problem by developing a metadata extractor that populates a document ontology and classifies the document according to a predefined hierarchy. The document ontology OntoDoc was created to store and make available the extracted metadata, as well as the obtained document classification. The implementation focused on Computer Science papers and used the ACM Computing Classification System in the document classification task. A sample set extracted from the ACM Digital Library was generated for training and for experiments on the implementation. The main contributions of this work are the integrated metadata extraction and classification model and the description of documents through metadata stored in an ontology, OntoDoc.
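The extraction step, pulling metadata out of a paper before populating the ontology, can be sketched as a simple rule-based extractor over a paper's header. The field layout and names below are assumptions for illustration; OntoDoc's actual schema is not reproduced here:

```python
import re

def extract_metadata(header_text):
    """Rule-based sketch: assumes the title on the first non-empty line,
    authors on the second, and a 'Categories: ...' line with ACM codes."""
    lines = [l.strip() for l in header_text.strip().splitlines() if l.strip()]
    meta = {
        "title": lines[0] if lines else "",
        "authors": [a.strip() for a in lines[1].split(",")] if len(lines) > 1 else [],
        "acm_categories": [],
    }
    for line in lines:
        m = re.match(r"Categories?:\s*(.+)", line)
        if m:
            # ACM CCS codes look like H.3.3 or I.2
            meta["acm_categories"] = re.findall(r"[A-K]\.\d+(?:\.\d+)?", m.group(1))
    return meta

paper = """Metadata Extraction with Ontologies
Luis Oliveira, Jane Doe
Categories: H.3.3, I.2.7
"""
print(extract_metadata(paper))
```

A real extractor over PDF output would need layout analysis or learned models rather than fixed line positions; the sketch only shows the shape of the extracted record that would then be stored as ontology instances.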
13

A probabilistic and incremental model for online classification of documents : DV-INBC

Rodrigues, Thiago Fredes January 2016 (has links)
Recently the fields of Data Mining and Machine Learning have seen a rapid increase in the creation and availability of data repositories, mainly due to the rapid creation of such data in social networks. A large part of this data consists of text documents, and the information stored in them can range from user-profile descriptions to common textual topics such as politics, sports, and science, information that is very useful for many applications. Moreover, since much of this data is created in streams, scalable and online algorithms are desirable, because tasks such as organizing and exploring large document collections would benefit from them.
In this thesis an incremental, online, probabilistic model for document classification is presented as an effort to tackle this problem. The algorithm, called DV-INBC, is an extension of the INBC algorithm. The two main characteristics of DV-INBC are that only a single scan over the training data is needed to create a model of it, and that the data vocabulary need not be known a priori. Therefore, little knowledge about the data stream is required. To assess its performance, tests using well-known datasets are presented.
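One way to realize the second property, classifying text without knowing the vocabulary a priori, is the hashing trick, which maps any word, including ones never seen during training, into a fixed-size count vector in a single pass. This is an illustrative device only; the internals of DV-INBC itself are not reproduced here:

```python
import zlib

def hashed_counts(text, n_buckets=16):
    """One-pass bag-of-words using the hashing trick: no vocabulary table
    is kept, and unseen words simply land in one of the fixed buckets."""
    vec = [0] * n_buckets
    for word in text.lower().split():
        # crc32 is deterministic across runs, unlike Python's salted hash()
        vec[zlib.crc32(word.encode("utf-8")) % n_buckets] += 1
    return vec

v1 = hashed_counts("stream of documents arriving online")
v2 = hashed_counts("totally new words still fit the same vector")
print(len(v1), len(v2))  # both vectors have length 16, whatever the vocabulary
```

The cost is that unrelated words may collide in a bucket, so bucket count trades memory against resolution; an incremental classifier can then update per-class statistics over such fixed-size vectors as each document streams in.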
16

Automatická klasifikace smluv pro portál HlidacSmluv.cz / Automated contract classification for portal HlidacSmluv.cz

Maroušek, Jakub January 2020 (has links)
The Contracts Register is a public database containing contracts concluded by public institutions. Due to the number of documents in the database, data analysis is problematic. The objective of this thesis is to find a machine learning approach for sorting the contracts into categories by their area of interest (real estate services, construction, etc.) and to implement the approach for use on the web portal Hlídač státu. A large number of categories and the lack of a tagged dataset of contracts complicate the solution.
17

Age-Suitability Prediction for Literature Using Deep Neural Networks

Brewer, Eric Robert 30 July 2020 (has links)
Digital media holds a strong presence in society today. Providers of digital media may choose to obtain a content rating for a given media item by submitting that item to a content rating authority. That authority then issues a content rating that denotes the age groups for which the media item is appropriate. Content rating authorities serve publishers in many countries for different forms of media such as television, music, video games, and mobile applications. Content ratings allow consumers to quickly determine whether a given media item suits their age or preference. Literature, on the other hand, lacks a comparable content rating authority. If a new, human-driven rating authority for literature were implemented, it would be impeded by the fact that literary content is published far more rapidly than other forms of digital media; humans working for such an authority simply could not issue accurate content ratings for items of literature at their current rate of production. Thus, to provide fast, automated content ratings for items of literature (i.e., books), we propose a computer-driven rating system that, given the text of a book, predicts its content rating within each of seven categories: 1) crude humor/language; 2) drug, alcohol, and tobacco use; 3) kissing; 4) profanity; 5) nudity; 6) sex and intimacy; and 7) violence and horror. Our computer-driven system circumvents the major hindrance to any theoretical human-driven rating system previously mentioned, namely the infeasible amount of time required. Our work has demonstrated that the mature content of literature can be accurately predicted through the use of natural language processing and machine learning techniques.
18

Automatic Document Classification in Small Environments

McElroy, Jonathan David 01 January 2012 (has links) (PDF)
Document classification is used to sort and label documents, giving users quicker access to relevant data. Users who work with a large inflow of documents spend time filing and categorizing them to allow for easier retrieval. The Automatic Classification and Document Filing (ACDF) system proposed here is designed to let users working with files or documents rely on the system to classify and store them with little manual attention. Using a system built on Hidden Markov Models, documents in a smaller desktop environment are categorized with better results than with a traditional Naive Bayes classifier.
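As a rough illustration of why sequence models can help here, the sketch below scores a document by class-conditional word-to-word transition probabilities. It is a visible Markov chain over words, a simplified stand-in for the Hidden Markov Models the ACDF system uses, with invented example data:

```python
import math
from collections import Counter, defaultdict

class MarkovChainClassifier:
    """Scores a document by per-class word-bigram transition probabilities;
    a simplified, fully observable analogue of an HMM-based classifier."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # (label, prev_word) -> next-word counts
        self.labels = set()
        self.vocab = set()

    def train(self, docs):
        for label, text in docs:
            self.labels.add(label)
            words = ["<s>"] + text.lower().split()  # <s> marks document start
            self.vocab.update(words)
            for prev, nxt in zip(words, words[1:]):
                self.transitions[(label, prev)][nxt] += 1

    def _score(self, label, text):
        words = ["<s>"] + text.lower().split()
        smooth = len(self.vocab) + 1
        score = 0.0
        for prev, nxt in zip(words, words[1:]):
            counts = self.transitions[(label, prev)]
            # Laplace-smoothed log transition probability
            score += math.log((counts[nxt] + 1) / (sum(counts.values()) + smooth))
        return score

    def predict(self, text):
        return max(self.labels, key=lambda lbl: self._score(lbl, text))

clf = MarkovChainClassifier()
clf.train([
    ("invoice", "invoice number due date total amount"),
    ("invoice", "total amount due invoice number"),
    ("letter", "dear sir thank you for your letter"),
    ("letter", "dear madam thank you kindly"),
])
print(clf.predict("invoice total amount due"))  # prints "invoice"
```

Unlike the bag-of-words baseline, the score depends on word order, which is the property that sequence models exploit for document types with characteristic phrasing.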
19

Rethinking Document Classification: A Pilot for the Application of Text Mining Techniques To Enhance Standardized Assessment Protocols for Critical Care Medical Team Transfer of Care

Walker, Briana Shanise 09 June 2017 (has links)
No description available.
20

Ανάπτυξη μεθόδων αυτόματης κατηγοριοποίησης κειμένων προσανατολισμένων στο φύλο / Development of methods for automatic gender-oriented text classification

Αραβαντινού, Χριστίνα 15 May 2015 (has links)
The impressive spread of social media in recent years raises fundamental issues for the research community. Collecting and organizing the enormous volume of available information by topic, author, age, or gender are characteristic examples of problems that need to be addressed. The accumulation of such information from the digital traces users leave as they voice opinions on various subjects or describe moments of their lives creates trends, which spread rapidly through tweets, blog posts, and Facebook statuses. Of particular interest is how all this information can be categorized by demographic characteristics such as gender or age. Direct information that users provide about themselves, as well as indirect information derived from linguistic analysis of their texts, are important data that can be used to detect an author's gender. More specifically, identifying a user's gender from textual data can be cast as a document classification problem: the text is processed and machine learning is then applied to detect the gender. In particular, statistical and linguistic analysis of the texts yields various features (e.g., word frequencies, parts of speech, word length, content-related features), which are then used to perform gender identification. The aim of this thesis is to study and develop a system for classifying blog and social networking texts by gender. The performance of different combinations of features and classifiers for gender identification is examined.
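The statistical and linguistic surface features described above can be sketched as a small feature extractor; the feature set below is illustrative, not the one evaluated in the thesis:

```python
def stylometric_features(text):
    """A few surface features commonly used in author-profiling work."""
    words = text.lower().split()
    n = len(words) or 1  # avoid division by zero on empty input
    # A tiny illustrative function-word list; real systems use hundreds.
    function_words = {"the", "a", "an", "of", "and", "i", "my", "you"}
    return {
        "avg_word_length": sum(len(w) for w in words) / n,
        "function_word_ratio": sum(w in function_words for w in words) / n,
        "type_token_ratio": len(set(words)) / n,  # vocabulary richness
    }

print(stylometric_features("I love my new phone and the camera"))
```

Vectors like this, possibly combined with word-frequency or part-of-speech counts, are what the classifiers under comparison would consume as input.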
