41

Interaktivní tvorba formulářů v JSP / Interactive Form Design in JSP

Kaluža, Vlastimil Unknown Date
This project concerns the creation of an application for generating dynamic web pages connected to a database. Its main goal is to identify the basic parts of the application and to create its class diagram. The final application will simplify web page design for non-IT users and is conceived as a basis for future extension and modification.
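The core idea of such a form generator can be sketched briefly. This is a minimal illustration, not code from the thesis (the thesis targets JSP); the table name, column descriptions, and type mapping here are invented for the example:

```python
# Sketch: generate an HTML form from a hypothetical table description,
# the way a database-driven dynamic form builder might.
def render_form(table, columns):
    """Render a minimal HTML form for the given (name, sql_type) columns."""
    inputs = []
    for name, sql_type in columns:
        # Map a few common SQL types to HTML input types; default to text.
        html_type = {"INTEGER": "number", "DATE": "date"}.get(sql_type, "text")
        inputs.append(f'<label>{name}: <input type="{html_type}" name="{name}"></label>')
    body = "\n".join(inputs)
    return (f'<form action="/insert/{table}" method="post">\n'
            f'{body}\n<button>Save</button>\n</form>')

html = render_form("person", [("name", "VARCHAR"), ("age", "INTEGER")])
```

A non-IT user would only maintain the column list; the HTML is derived from it.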
42

Segmentace webových stránek s využitím shlukovacích technik / Web page segmentation utilizing clustering techniques

Zelený, Jan January 2017
Information retrieval and other data-mining techniques applied to web pages are gaining importance as web technologies evolve and as the amount of information stored on the web, often as its sole medium, keeps growing. Along with this information, however, the amount of content that is irrelevant in the context of the presented information grows as well. This is one of the reasons why preprocessing of information stored on the web deserves intensive attention. Segmentation algorithms are one possible form of such preprocessing. This thesis focuses on using clustering techniques both to make existing web page segmentation algorithms more efficient and to find entirely new ones.
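The clustering idea behind such segmentation can be illustrated with a toy version. This is an assumed sketch, not the thesis's algorithm: page boxes whose centers lie close together are greedily merged, so each resulting cluster approximates one visual segment:

```python
# Minimal sketch of clustering-based page segmentation: merge clusters
# of boxes whenever any two boxes across them are closer (Manhattan
# distance between centers) than a threshold.
def segment(boxes, threshold):
    """boxes: list of (x, y, w, h); returns a list of clusters of boxes."""
    def center(b):
        x, y, w, h = b
        return (x + w / 2, y + h / 2)

    clusters = [[b] for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(abs(center(a)[0] - center(b)[0])
                       + abs(center(a)[1] - center(b)[1]) < threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i].extend(clusters[j])
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters
```

For example, two overlapping boxes in one page corner and a distant box elsewhere yield two segments.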
43

Indexation et interrogation de pages web décomposées en blocs visuels

Faessel, Nicolas 14 June 2011
This thesis is about indexing and querying Web pages. We propose a new model called BlockWeb, based on the decomposition of Web pages into a hierarchy of visual blocks. This model takes into account the visual importance of each block as well as the permeability of each block to the content of its neighboring blocks on the page. Splitting a page into blocks has several advantages for indexing and querying: in particular, it allows querying at a finer granularity than the whole page, so the blocks most similar to a query can be returned instead of the full page. A page is modeled as a directed acyclic graph, the IP graph, where each node is associated with a block and labeled by the importance coefficient of that block, and each arc is labeled by the permeability coefficient of the target node's content to the source node's content. In order to build this graph from the block-tree representation of a page, we propose a new language: XIML (XML Indexing Management Language), a rule-based language in the style of XSLT. The model has been assessed on two distinct datasets: finding the best entry point in a corpus of electronic newspaper articles, and image indexing and querying in a corpus drawn from the ImagEval 2006 campaign. We present the results of these experiments.
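The IP-graph scoring idea described in this abstract can be sketched as follows. This is a hedged illustration: the data-structure names and the exact scoring formula (own weight plus permeability-scaled weight leaking in from source blocks, scaled by importance) are assumptions for the example, not the thesis's definition:

```python
# Sketch of scoring a block for a term in an IP-style graph:
# nodes carry importance coefficients, arcs carry permeability
# coefficients, and content "leaks" along arcs into the target block.
def block_score(block, term, weights, importance, permeability, incoming):
    """weights[b][t]: raw term weight of term t in block b;
    permeability[(src, dst)]: coefficient on arc src -> dst;
    incoming[b]: list of source blocks with an arc into b."""
    own = weights[block].get(term, 0.0)
    leaked = sum(permeability[(src, block)] * weights[src].get(term, 0.0)
                 for src in incoming.get(block, []))
    return importance[block] * (own + leaked)
```

A block that never mentions a term can still score for it if a highly permeable neighbor does, which is the point of the model.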
44

Identificando o Tópico de Páginas Web / Identifying the topic of Web Pages

Lima, Márcia Sampaio 24 April 2009
Textual and structural sources of evidence extracted from web pages are frequently used to improve the results of Information Retrieval (IR) systems. The main topic of a web page is a textual source of evidence with wide applicability in IR systems: it can serve as a new source of evidence to improve ranking, page classification, and filtering, among other applications. In this work, we propose, develop, and evaluate a method to identify the main topic of a web page using a combination of different sources of evidence. We define the main topic of a web page as a set of at most five distinct keywords related to the main subject of the page. The proposed method is divided into four phases: (1) identification of candidate keywords that describe the web page content, using multiple sources of evidence; (2) use of a genetic algorithm to combine these sources of evidence; (3) selection of the three best keywords for the page; and (4) use of a web directory hierarchy to identify the page's main topic. The experiments show that: (1) the best single source of evidence for describing a page's keywords is the concatenated anchor text of links pointing to it; (2) the proposed method is effective at identifying the main topic of a web page, scoring 0.9129 on a scale from zero to one; and (3) part of the proposed method is also effective for automatically classifying web pages into the Google directory hierarchy, reaching 88%±0.11 precision in the classification task. / Funded by Fundação de Amparo à Pesquisa do Estado do Amazonas.
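Phases (1) to (3) of the method described above can be sketched compactly. This is an assumed illustration: each evidence source scores candidate terms, a weight vector combines them (the thesis tunes these weights with a genetic algorithm; they are fixed here for brevity), and the top-scoring terms become the page's keywords:

```python
# Sketch: linearly combine per-source term scores with learned weights
# and keep the k best terms as keyword candidates for the page.
def top_keywords(evidence_scores, weights, k=5):
    """evidence_scores: {source: {term: score}}; weights: {source: float}."""
    combined = {}
    for source, scores in evidence_scores.items():
        for term, score in scores.items():
            combined[term] = combined.get(term, 0.0) + weights.get(source, 0.0) * score
    return [t for t, _ in sorted(combined.items(), key=lambda kv: -kv[1])[:k]]
```

A genetic algorithm would search the space of weight vectors for the one maximizing keyword quality on a training set.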
45

The usability of a computer-based Statistics Data and Story Library in the South African context

Basson, Elizabeth Maria 04 February 2002
Vista University is known in South Africa as a historically disadvantaged or black university. It is a multi-campus university (it has eight campuses throughout South Africa) and caters for learners from historically disadvantaged backgrounds. The Department of Mathematics and Statistics holds an annual meeting to coordinate the department's activities across all eight campuses; attendance is compulsory for all lecturers from all the campuses. Every year the same problem arises: drawing up examination papers of a uniform standard across all the campuses. It is a very frustrating task for the compiler of the papers to get contributions from the lecturers that are submitted on time, in the agreed format, and of an acceptable standard. During the 2000 meeting it was unanimously agreed that the long-term solution would be a database of questions in the agreed format and of an acceptable standard. Because the lecturers are spread across South Africa, this database must be available through Vista's intranet. The development of such a product would involve a great deal of time and energy, and the most important question is whether the lecturers would actually use it. The solution is to design a prototype of the product: a database with a Web-based portal, populated with a sample of questions. The usability of such a database must be determined to ensure the effectiveness of the final product. The aim of this study is, after a prototype of a Web-based Statistics Data and Story Library in the South African context (hereafter referred to as SSS) has been implemented, to determine the usability of the product. / Thesis (MEd)--University of Pretoria, 2001. / Curriculum Studies / MEd
46

Segmentace webových stránek s využitím shlukování / Web Page Segmentation Algorithms Based on Clustering

Lengál, Tomáš January 2017
This report deals with the segmentation of web pages, which is an important discipline of information extraction. In the first part, we describe several general ways to implement it. We then introduce the Box Clustering Segmentation method, which takes a slightly different approach to segmentation. The second half describes the implementation of this method as part of the FITLayout framework and its final testing.
47

Sémantická analýza webového obsahu / Semantic Analysis of Web Content

Hubl, Lukáš January 2020
This work deals with the semantic web, web page segmentation, and the technologies used in this area. It also modifies one web page segmentation method, namely DOM-based segmentation, using semantic web technologies. The work thus designs a way of segmenting web pages based on semantic analysis of the individual elements of a page's content. An application demonstrating the functionality of the designed segmentation method was also created as part of this work, and the results of experiments performed with this application are reported as well.
48

Analýza příležitosti a proveditelnosti nového internetového portálu včetně realizace / Opportunities and Feasibility Analysis of New Internet Portal Including Realization

Kuklínek, David January 2012
The main aim of this Master's thesis is a feasibility analysis and software solution for a specific classified-ads portal. The feasibility analysis examines, in particular, competitive forces, potential users, a financial analysis, and prognoses of future growth and development. The realization part covers the software solution and its implementation. All outputs of the thesis serve as the basis for the growing project Flatsharing.
49

Automatizovaná navigace na privátních stránkách / Automatic Navigation on Private Websites

Kliment, Radek January 2012
This thesis deals with technologies related to web pages and describes navigation across them, including authentication for access to their private sections and user context management. It introduces the design of a mechanism for automated navigation, including a new scripting language and tools for visual description. The work also contains the design of an application using this mechanism and the implementation of its parts. The last chapter sums up the knowledge acquired by testing on various websites.
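The flavor of such a navigation script can be conveyed with a toy interpreter. This is purely illustrative: the step names, the session layout, and the fetcher interface are invented for the example, not taken from the thesis's scripting language:

```python
# Sketch: interpret a list of navigation steps against a pluggable
# fetcher, keeping login state in a session the way an automated
# navigator for private pages would.
def run_script(steps, fetch):
    """steps: list of (action, arg); fetch(url, session) -> page text."""
    session = {"cookies": {}, "page": ""}
    for action, arg in steps:
        if action == "login":
            session["cookies"]["auth"] = arg   # stand-in for a real auth flow
        elif action == "open":
            session["page"] = fetch(arg, session)
        elif action == "assert_contains":
            if arg not in session["page"]:
                raise RuntimeError(f"expected {arg!r} on page")
    return session
```

A real implementation would replace the stand-in login with form submission or HTTP authentication and carry the cookies into each fetch.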
50

Metody klasifikace webových stránek / Methods of Web Page Classification

Nachtnebl, Viktor January 2012
This work deals with methods of web page classification. It explains the concept of classification and the different features of web pages used for classifying them. It further analyses the representation of a page and describes in detail a classification method that uses a hierarchical category model and is able to create new categories dynamically. The second half presents the implementation of the chosen method and describes the results.
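The idea of dynamically creating categories can be sketched as follows. This is a hedged illustration, not the thesis's model: a document joins the most similar existing category, or opens a new one when no similarity clears a threshold; the Jaccard term-overlap similarity is a placeholder:

```python
# Sketch: assign a document (as a set of terms) to the best-matching
# category, creating a new category when nothing matches well enough.
def classify(doc_terms, categories, threshold=0.3):
    """categories: {name: set of terms}; returns (assigned_name, categories)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    best = max(categories, key=lambda c: jaccard(doc_terms, categories[c]),
               default=None)
    if best is not None and jaccard(doc_terms, categories[best]) >= threshold:
        categories[best] |= doc_terms          # refine the matched category
        return best, categories
    name = f"category-{len(categories) + 1}"   # dynamically created category
    categories[name] = set(doc_terms)
    return name, categories
```

Documents about an unseen subject thus grow the hierarchy instead of being forced into an ill-fitting class.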
