1

Ontology Learning and Information Extraction for the Semantic Web

Kavalec, Martin January 2006
The work gives an overview of its three main topics: the semantic web, information extraction, and ontology learning. A method for identifying relevant information on web pages is described and experimentally tested on the pages of companies offering products and services. The method is based on an analysis of sample web pages and their position in the Open Directory catalogue. Furthermore, a modification of an association rule mining algorithm is proposed and experimentally tested. In addition to identifying a relation between ontology concepts, it suggests a possible name for the relation.
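The association-rule step described above can be sketched in miniature. This is a toy illustration only, not the thesis's actual algorithm: it counts concept-pair co-occurrences across token lists, keeps pairs above a support threshold, and labels each kept pair with the most frequent token seen between the two concepts (a stand-in for the relation-naming idea). All data and names here are hypothetical.

```python
from collections import Counter
from itertools import combinations

def mine_relations(sentences, concepts, min_support=0.3):
    """Toy association-rule pass: find concept pairs that co-occur
    frequently and label the relation with the most common token
    seen between them (a stand-in for the naming step)."""
    n = len(sentences)
    pair_counts = Counter()
    pair_labels = {}
    for tokens in sentences:
        present = [c for c in concepts if c in tokens]
        for a, b in combinations(sorted(present), 2):
            pair_counts[(a, b)] += 1
            # collect the tokens between the two concepts as label candidates
            i, j = tokens.index(a), tokens.index(b)
            lo, hi = min(i, j), max(i, j)
            pair_labels.setdefault((a, b), Counter()).update(tokens[lo + 1:hi])
    rules = {}
    for pair, count in pair_counts.items():
        support = count / n
        if support >= min_support:
            label = pair_labels[pair].most_common(1)[0][0]
            rules[pair] = (round(support, 2), label)
    return rules

sentences = [
    ["company", "offers", "product"],
    ["company", "offers", "service"],
    ["company", "sells", "product"],
    ["product", "has", "price"],
]
print(mine_relations(sentences, ["company", "product", "service"]))
# → {('company', 'product'): (0.5, 'offers')}
```

The ("company", "service") pair is pruned because its support (1/4) falls below the threshold, while the surviving pair is named by its dominant connecting verb.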
2

Die Dynamik des World Wide Web und Konsequenzen für die Informationssuche / The dynamics of the World Wide Web and its consequences for information search

Kukulenz, Dirk January 2009
Also published as: Lübeck, Univ., habilitation thesis, 2009
3

Identifying Content Blocks on Web Pages using Recursive Neural Networks and DOM-tree Features / Identifiering av innehållsblock på hemsidor med rekursiva neurala nätverk och DOM-trädattribut

Riddarhaage, Teodor January 2020
The internet is a source of abundant information spread across different web pages. The identification and extraction of information from the internet have long been an active area of research for multiple purposes relating to both research and business intelligence. However, many of the existing systems and techniques rely on assumptions that limit their general applicability and negatively affect their performance as the web changes and evolves. This work explores the use of Recursive Neural Networks (RecNNs), along with the extensive set of features present in the DOM trees of web pages, as a technique for identifying information on the internet without the need for strict assumptions on the structure or content of web pages. Furthermore, the use of Sparse Group LASSO (SGL) is explored as an effective tool for performing feature selection in the context of web information extraction. The results show that a RecNN model outperforms a similarly structured feedforward baseline for the task of identifying cookie consent dialogs across various web pages. Furthermore, the results suggest that SGL can be used as an effective tool for feature selection of DOM-tree features.
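The two ingredients mentioned in this abstract can be illustrated with a minimal sketch: a recursive step that combines a DOM node's own features with an aggregate of its children's representations, and a group-lasso-style penalty that pushes whole feature groups to zero. The feature layout, hand-picked weights, and tiny tree below are illustrative assumptions, not the thesis's trained model.

```python
import math

# Hypothetical per-node features: [tag_is_div, text_length_norm, depth_norm]
def node_vec(features, children):
    """RecNN-style step: combine a node's own DOM features with the mean
    of its children's representations, then squash with tanh. The weights
    (identity plus a 0.5 child factor) are toy values, not learned."""
    if children:
        child_mean = [sum(c[i] for c in children) / len(children) for i in range(3)]
    else:
        child_mean = [0.0, 0.0, 0.0]
    combined = [f + 0.5 * m for f, m in zip(features, child_mean)]
    return [math.tanh(x) for x in combined]

def group_lasso_penalty(weights, groups, lam=0.1):
    """Sparse-Group-LASSO-flavoured penalty: lambda times the sum of
    per-group L2 norms, which zeroes out entire feature groups
    (e.g. all tag-name indicators) at once."""
    return lam * sum(math.sqrt(sum(weights[i] ** 2 for i in g)) for g in groups)

# Tiny DOM: a <div> root with two leaf children.
leaf1 = node_vec([0.0, 0.2, 0.5], [])
leaf2 = node_vec([0.0, 0.8, 0.5], [])
root = node_vec([1.0, 0.1, 0.0], [leaf1, leaf2])
print([round(v, 3) for v in root])
# Second group of weights is entirely zero, so only the first group contributes.
print(round(group_lasso_penalty([0.3, 0.4, 0.0, 0.0], [[0, 1], [2, 3]]), 3))
```

The root's representation depends on its descendants bottom-up, which is what lets a recursive model use subtree context (e.g. the text inside a dialog) when classifying a node.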
4

A visual approach to web information extraction: Extracting information from e-commerce web pages using object detection

Brokking, Alexander January 2023
The vastness of the internet has resulted in an abundance of information that is unorganized and dispersed across numerous web pages. This has been the motivation for automatic web page extraction since the dawn of the internet era. Current strategies primarily apply heuristics and natural language processing methods to the HTML of web pages. However, considering the visual and interactive nature of web pages designed for human use, this thesis explores the potential of computer-vision-based approaches for web page extraction.
In this thesis, state-of-the-art object detection models are trained and evaluated in several experiments on datasets of e-commerce websites to determine their viability. The results indicate that a pre-trained Conditional DETR architecture with a ResNet50 backbone can be fine-tuned to consistently identify target labels of new domains with an mAP_50 >80%. Visual extraction on new examples within known domain structures showed an even higher mAP_50 above 98%. Finally, this thesis surveys the currently available datasets that can be used for visual extraction and highlights the importance of domain diversity in the training data. Through this work, initial insights are offered into the application of computer vision in web page extraction, with the hope of inspiring further research in this direction.
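The mAP_50 figures quoted above rest on a simple matching rule: a predicted box counts as correct if it overlaps an unmatched ground-truth box with intersection-over-union (IoU) of at least 0.5. A minimal sketch of that criterion, with toy boxes standing in for detected page fields (the box coordinates and field names are hypothetical, and full mAP additionally averages precision over confidence thresholds and classes):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_at_50(predictions, ground_truth):
    """Fraction of predicted boxes that match a so-far-unmatched
    ground-truth box with IoU >= 0.5 -- the matching criterion
    underlying mAP_50."""
    unused = list(ground_truth)
    hits = 0
    for pred in predictions:
        for gt in unused:
            if iou(pred, gt) >= 0.5:
                hits += 1
                unused.remove(gt)  # each ground-truth box matches at most once
                break
    return hits / len(predictions)

gt = [(0, 0, 100, 50), (200, 0, 300, 50)]     # e.g. a price field and a title field
preds = [(5, 0, 105, 50), (400, 0, 500, 50)]  # one close match, one clear miss
print(precision_at_50(preds, gt))  # → 0.5
```

The first prediction overlaps its ground-truth box with IoU ≈ 0.9 and counts as a hit; the second overlaps nothing, giving a precision of 0.5 at the 0.5 threshold.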
