21

Developing a Semantic Web Crawler to Locate OWL Documents

Koron, Ronald Dean 18 September 2012 (has links)
No description available.
22

A Distributed Approach to Crawl Domain Specific Hidden Web

Desai, Lovekeshkumar 03 August 2007 (has links)
A large amount of online information resides on the invisible web: web pages generated dynamically from databases and other data sources that are hidden from current crawlers, which retrieve content only from the publicly indexable Web. In particular, such crawlers ignore the tremendous amount of high-quality content "hidden" behind search forms, as well as pages that require authorization or prior registration in large searchable electronic databases. To extract data from the hidden web, it is necessary to find the search forms and fill them with appropriate information so as to retrieve the maximum amount of relevant information. To meet the complex challenges that arise when searching the hidden web, namely extensive analysis of both the search forms and the retrieved content, it becomes necessary to design and implement a distributed web crawler that runs on a network of workstations. We describe the software architecture of this distributed and scalable system and present a number of novel techniques that went into its design and implementation to extract as much relevant data as possible from the hidden web while achieving high performance.
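The form-discovery step described above can be made concrete. The following Python fragment, assuming the `requests` and `beautifulsoup4` libraries, shows one way to locate candidate search forms on a page and submit them with a query term; the heuristic and the helper names are illustrative, not the thesis's actual implementation.

```python
# A minimal sketch of hidden-web form discovery and submission.
import requests
from bs4 import BeautifulSoup

def find_search_forms(url):
    """Return candidate search forms on a page as (action, method, fields)."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    forms = []
    for form in soup.find_all("form"):
        fields = {inp.get("name"): inp.get("value", "")
                  for inp in form.find_all("input") if inp.get("name")}
        # Heuristic: a form containing a free-text input may be a search form.
        has_text = any(inp.get("type", "text") == "text"
                       for inp in form.find_all("input"))
        if has_text:
            forms.append((form.get("action", ""),
                          form.get("method", "get").lower(), fields))
    return forms

def submit_form(base_url, action, method, fields, query_field, query):
    """Fill one field with a query term, submit, and return the result page."""
    fields = dict(fields, **{query_field: query})
    target = requests.compat.urljoin(base_url, action)
    if method == "post":
        return requests.post(target, data=fields, timeout=10).text
    return requests.get(target, params=fields, timeout=10).text
```

In a distributed setting such as the one the abstract describes, workers would share a frontier of discovered forms and divide the query submissions among themselves.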
23

AIM - A Social Media Monitoring System for Quality Engineering

Bank, Mathias 27 June 2013 (has links) (PDF)
In the last few years the World Wide Web has dramatically changed the way people communicate with each other. The growing availability of Social Media systems like Internet forums, weblogs, and social networks ensures that the Internet today is what it was originally designed for: a technical platform in which all users are able to interact with each other. Nowadays, there are billions of user comments available discussing all aspects of life, and the data source is still growing. This thesis investigates whether it is possible to use this growing amount of freely provided user comments to extract quality-related information. The concept is based on the observation that customers do not only post marketing-relevant information; they also publish product-oriented content, including positive and negative experiences. It is assumed that this information represents a valuable data source for quality analyses: the original voices of the customers promise a more exact and more concrete definition of "quality" than the one available to manufacturers or market researchers today. However, the huge amount of unstructured user comments makes their evaluation very complex, and it is impossible for an analyst to manually investigate the provided customer feedback. Therefore, Social Media specific algorithms have to be developed to collect, pre-process, and finally analyze the data. This is done by the Social Media monitoring system AIM (Automotive Internet Mining), the subject of this thesis. It investigates how manufacturers, products, product features, and related opinions are discussed in order to estimate the overall product quality from the customers' point of view. AIM is able to track different types of data sources using a flexible multi-agent based crawler architecture. In contrast to classical web crawlers, the multi-agent based crawler supports individual crawling policies to minimize the download of irrelevant web pages. In addition, an unsupervised wrapper induction algorithm is introduced to automatically generate content extraction parameters specific to the crawled Social Media systems. The extracted user comments are analyzed by different content analysis algorithms to gain a deeper insight into the discussed topics and opinions. Three different topic types are supported, depending on the analysis needs:
* Highly reliable analysis results are produced by a special context-aware, taxonomy-based classification system.
* Fast ad-hoc analyses are applied on top of classical full-text search capabilities.
* Blind spots are detected by a new fuzzified hierarchical clustering algorithm, which generates topical clusters while supporting multiple topics within each user comment.
All three topic types are treated in a unified way, so that an analyst can apply all methods simultaneously and interchangeably. The systematically processed user comments are visualized in an easy and flexible interactive analysis frontend. Special abstraction techniques support the investigation of thousands of user comments with minimal time effort; specifically created indices show the relevancy and customer satisfaction of a given topic. / In recent years the World Wide Web has changed dramatically. A few years ago it was still primarily an information source in which only a small share of users could publish content; it has since developed into a communication platform in which every user can participate actively. The resulting volume of data covers every aspect of daily life, including quality topics. Analyzing this data promises to improve quality assurance measures considerably, because it covers topics that are difficult to measure with classical sensors. The systematic and reproducible analysis of user-generated data, however, requires adapting existing tools and developing new Social Media specific algorithms. This thesis therefore creates an entirely new Social Media monitoring system with which an analyst can examine thousands of user comments with minimal time effort. Applying the system has revealed several advantages that make it possible to identify the customer-driven definition of "quality".
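The individual-crawling-policy idea can be illustrated with a small sketch: one agent per Social Media source, each with its own politeness delay and URL relevance filter. The class and parameter names below are hypothetical, not AIM's actual architecture.

```python
# A minimal sketch of per-source crawl agents with individual policies.
import re
import time

class CrawlAgent:
    def __init__(self, name, seed_urls, url_pattern, delay_s):
        self.name = name
        self.frontier = list(seed_urls)
        self.seen = set(seed_urls)
        self.url_pattern = re.compile(url_pattern)  # source-specific relevance filter
        self.delay_s = delay_s                      # source-specific politeness delay

    def accepts(self, url):
        # Skip pages this source's policy deems irrelevant,
        # minimizing downloads of uninteresting pages.
        return bool(self.url_pattern.search(url))

    def step(self, fetch, extract_links):
        """Crawl one page; `fetch` and `extract_links` are caller-supplied."""
        if not self.frontier:
            return None
        url = self.frontier.pop(0)
        page = fetch(url)
        for link in extract_links(page, url):
            if link not in self.seen and self.accepts(link):
                self.seen.add(link)
                self.frontier.append(link)
        time.sleep(self.delay_s)
        return page
```

A forum agent might accept only thread URLs and wait a few seconds between requests, while a weblog agent follows archive pages with a different rhythm; the coordinator simply round-robins over the agents.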
24

Entwurf eines konfigurierbaren Web-Crawler-Frameworks zur weiteren Verwendung für Single-Hosted Media Retrieval / Design of a configurable web crawler framework for further use in single-hosted media retrieval

Zemlin, Toralf 02 October 2008 (has links) (PDF)
This thesis describes a web crawler framework for the Media Informatics chair at Chemnitz University of Technology, together with its core implementation. The crawler traverses the WWW graph, and every document passes through the framework's various modules. A scheduling module decides the order of traversal. The main focus of this development is extensibility toward different variations of the data collector. The thesis shows which information must accompany a document for the essential decisions, including re-identification of documents, scheduling criteria, and URL index maintenance. The framework is configurable: its core covers the crawling function, while additional interfaces are provided for filter and storage components. The crawler has an administration interface through which it can be controlled, and it reports status and statistics on events and progress. Finally, test criteria are presented and problems are discussed.
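The framework structure described here, a scheduler deciding traversal order plus pluggable filter and storage modules, can be sketched as follows. This is a minimal Python illustration under assumed interfaces, not the thesis's actual design.

```python
# A minimal sketch of a configurable crawler core: priority scheduling
# plus a chain of pluggable document-processing modules.
import heapq

class Scheduler:
    """Priority-based frontier; the priority function is configurable."""
    def __init__(self, priority):
        self.priority = priority  # callable (url, depth) -> number, lower first
        self.heap = []

    def push(self, url, depth):
        heapq.heappush(self.heap, (self.priority(url, depth), depth, url))

    def pop(self):
        _, depth, url = heapq.heappop(self.heap)
        return url, depth

class FilterModule:
    def process(self, doc):
        # Drop non-HTML documents; later modules see only what passes.
        return doc if doc.get("type") == "text/html" else None

class StorageModule:
    def process(self, doc):
        print("stored", doc["url"])  # stand-in for a real storage backend
        return doc

def run(scheduler, modules, fetch, max_docs=100):
    """Drive the crawl; `fetch` is a caller-supplied (url, depth) -> doc."""
    done = 0
    while scheduler.heap and done < max_docs:
        url, depth = scheduler.pop()
        doc = fetch(url, depth)
        for module in modules:
            doc = module.process(doc)
            if doc is None:  # a module vetoed the document
                break
        done += 1
```

A breadth-first policy is then just `Scheduler(lambda url, depth: depth)`, and swapping the priority function or the module chain changes the crawler's behavior without touching the core.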
25

設計與實作一個臉書粉絲頁資料抓取器 / Design and Implementation of a Facebook Fan Page Data Crawler

鄭博元, Cheng, Po Yuan Unknown Date (has links)
With the popularity of social networking services in recent years, Facebook has become a major social tool. Many celebrities and companies have gone with the tide and established fan pages on Facebook to interact with fans. The mutual influence of the virtual world and the real world drives many emerging research agendas, and using information technology to collect data from the virtual world can help humanities scholars and social scientists explore new phenomena between digital technology and society. In this thesis, we focus on Facebook fan page data. We design and construct a Facebook fan page crawler to help scholars obtain data for analysis. The crawler helps researchers find relevant fan pages, ranked by the number of likes, so that popular pages can be selected. It retrieves the data of a chosen fan page, parses it into post messages, comment messages, and like messages, and stores the results in a database. For fan pages that have already been crawled, it automatically updates the stored data to the latest state on a timer.
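A sketch of the crawling flow is given below. It assumes a valid access token and the Graph API's `/{page-id}/posts` endpoint with cursor-based paging; the API version and field details have changed repeatedly over the years, so treat this as an illustration of the approach rather than a working harvester.

```python
# A minimal sketch of fetching fan-page posts via the Facebook Graph API.
import requests

GRAPH = "https://graph.facebook.com/v2.3"  # version is a placeholder assumption

def fetch_page_posts(page_id, token):
    """Yield post dicts for a fan page, following the API's paging links."""
    url = f"{GRAPH}/{page_id}/posts"
    params = {"access_token": token, "limit": 100}
    while url:
        data = requests.get(url, params=params, timeout=10).json()
        for post in data.get("data", []):
            yield post  # each post would then be parsed into post/comment/like records
        # The "next" link already embeds the token and cursor parameters.
        url = data.get("paging", {}).get("next")
        params = {}
```

The timed-update behavior the abstract mentions would wrap such a generator in a periodic job that stops paging once it reaches posts already present in the database.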
26

Nástroj pro automatické kategorizování webových stránek / Automated Web Page Categorization Tool

Lat, Radek January 2014 (has links)
This master's thesis describes the design and implementation of a tool for automatic categorization of web pages. The goal is a tool that can learn from sample web pages what each category looks like, and then assign the learned categories to previously unseen web pages. The tool is meant to support multiple categories and languages. Advanced techniques from machine learning, language detection, and data mining were used in its development. The tool is based on open source libraries and is written in Python 3.3.
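The learn-from-examples workflow can be illustrated with a minimal sketch, here using scikit-learn rather than whatever libraries the thesis actually builds on; the toy training data and the choice of TF-IDF features with naive Bayes are assumptions.

```python
# A minimal sketch of learning page categories from labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: page text paired with a category label.
pages = ["buy cheap flights and hotels", "latest football scores and results"]
labels = ["travel", "sports"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(pages, labels)

# Assign a learned category to a previously unseen page.
print(model.predict(["discount hotel booking"]))  # -> ['travel']
```

Multi-language support, as the abstract describes, would add a language-detection step in front and train one such model per detected language.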
27

Pokročilý robot na procházení webu / Advanced Web Crawler

Činčera, Jaroslav January 2010 (has links)
This master's thesis describes the design and implementation of an advanced web crawler. The crawler can be configured by the user and is designed to browse the web according to specified parameters, acquiring and evaluating the content of web pages. It is configured by creating projects, which consist of different types of steps. The user can create simple actions, such as downloading a page or submitting a form, or compose more complex and larger projects.
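The project/step configuration model can be sketched as follows, assuming two illustrative step types (page download and form submission) that share one HTTP session; the class names and the example project are hypothetical, not the thesis's actual design.

```python
# A minimal sketch of a project built from typed steps.
import requests

class DownloadStep:
    def __init__(self, url):
        self.url = url
    def run(self, session, _prev):
        return session.get(self.url, timeout=10)

class SubmitFormStep:
    def __init__(self, url, fields):
        self.url, self.fields = url, fields
    def run(self, session, _prev):
        return session.post(self.url, data=self.fields, timeout=10)

def run_project(steps):
    """Execute a project; steps share one session (cookies, auth state)."""
    with requests.Session() as session:
        result = None
        for step in steps:
            result = step.run(session, result)
        return result

# Example project: log in via a form, then download a protected page.
project = [
    SubmitFormStep("https://example.com/login", {"user": "u", "pass": "p"}),
    DownloadStep("https://example.com/data"),
]
```

Because each step receives the previous result, more complex projects can chain evaluation steps that inspect one page's content to decide what the next step fetches.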
29

Den svenska gröna bloggosfärens sociala nätverk : En kartläggning av dess nätverksstruktur och aktörer / The network structure surrounding the Swedish blogosphere : A mapping of its network structure and actors

Sjöberg, Sofia January 2016 (has links)
Social movements exist on the Internet in the form of social networks, with the aim of changing social conditions without using established channels. This study focuses on how the environment is presented and discussed in a Swedish consumer society by making visible the link network that makes up the social network surrounding the Swedish green blogosphere. The study's aim is to explore possible problems in mapping a social network, as well as to determine the types of actors involved and how these contribute to our understanding of how green living is rooted in an everyday context. The study has identified and analysed the hyperlinks, actors, and relationships that make up the social network, and has visualized them in the shape of a network map. The study shows that social networks cannot be investigated by means of hyperlinks alone; a more multifaceted understanding of the social movement is required. The mapping tool is, however, a good analytical instrument that helps the researcher see thematic areas and contributes an overall visual view of the network. The actors included in the network seem above all to associate themselves with similar actors and follow the so-called politics of association. As a result, the study indicates that the network consists of three clusters with different characters. One of the clusters mainly consists of garden-related blogs and businesses and has no connection to green living, which means it is not part of the underlying social movement. The remaining two clusters indicate that green living is discussed from either a micro or a macro perspective. A majority of the blogs take a micro perspective, with posts that lack grounding in underlying causes and problems. The remaining actors, organizations, newspapers, and a few blogs, take a macro perspective and relate environmental issues to society as a whole, with posts that are more grounded in facts and information. Thus, there seems to be a need both to relate green living to personal choices from a micro perspective with a positive tone, and to understand the underlying causes from a macro perspective.
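The link-network mapping itself is mechanically simple; a minimal sketch, assuming the `networkx` library and toy blog data rather than the study's actual Swedish corpus, is shown below.

```python
# A minimal sketch of building and clustering a blog link network.
import networkx as nx

# Toy hyperlink data: (source site, linked site) pairs a crawler collected.
links = [
    ("garden-blog.se", "seed-shop.se"),
    ("eco-life.se", "zero-waste.se"),
    ("eco-life.se", "naturskyddsforeningen.se"),
]

graph = nx.DiGraph()
graph.add_edges_from(links)

# Undirected connected components as a crude stand-in for the thematic
# clusters the study identified in its network map.
for cluster in nx.connected_components(graph.to_undirected()):
    print(sorted(cluster))
```

The study's point stands out even in this toy form: the graph shows who links to whom, but deciding whether a cluster belongs to the social movement still requires reading the actors' content.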
30

Automatizované zhromažďovanie a štrukturalizácia dát z webových zdrojov / Automated collection and structuring of data from web sources

Zahradník, Roman January 2018 (has links)
This diploma thesis deals with the creation of a solution for continuous data acquisition from web sources. The application automatically navigates web pages, extracts data using dedicated selectors, and subsequently standardizes it for further processing in data mining.
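The selector-based extraction and standardization step can be sketched briefly. The following Python fragment assumes `beautifulsoup4`; the CSS selectors, the record schema, and the price normalization are illustrative, not the thesis's actual configuration.

```python
# A minimal sketch of dedicated-selector extraction plus standardization.
from bs4 import BeautifulSoup

SELECTORS = {"title": "h1.product-name", "price": "span.price"}

def extract(html):
    soup = BeautifulSoup(html, "html.parser")
    raw = {field: (el.get_text(strip=True) if (el := soup.select_one(css)) else None)
           for field, css in SELECTORS.items()}
    # Standardize: normalize the price string into a float for data mining.
    if raw["price"]:
        raw["price"] = float(raw["price"].replace(",", ".").rstrip(" €"))
    return raw

html = '<h1 class="product-name">Widget</h1><span class="price">12,50 €</span>'
print(extract(html))  # {'title': 'Widget', 'price': 12.5}
```

Keeping the selectors in a per-source configuration, as the abstract implies, lets the same extraction code serve many web sources.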
