21

OPIS : um método para identificação e busca de páginas-objeto / OPIS : a method for object page identifying and searching

Colpo, Miriam Pizzatto January 2014 (has links)
Object pages are pages that represent exactly one real-world object on the web within a specific domain, and the search for these pages is called object search. General search engines (GSEs) can satisfactorily answer most web searches performed today, but this rarely holds for object search, since the number of object pages retrieved is generally quite limited. This dissertation proposes a new method for identifying and searching object pages, named OPIS (an acronym for Object Page Identifying and Searching). The core of OPIS is the adoption of relevance feedback and machine learning techniques for the content-based classification of object pages. OPIS does not discard the use of GSEs; instead, its search step integrates a classifier with a GSE, adding a filtering step to the traditional search process. This approach ensures that only pages identified as object pages are retrieved by user queries, thereby improving object search results. Experiments on real datasets show that OPIS outperforms the baseline with an average gain of 47% in average precision.
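The filtering step the abstract describes can be sketched as follows. This is a hypothetical illustration, not the thesis's actual implementation: `gse_search` and the keyword heuristic in `is_object_page` are toy stand-ins for a real search engine and a trained content-based classifier.

```python
def filter_object_pages(results, is_object_page):
    """Keep only the pages the classifier labels as object pages."""
    return [page for page in results if is_object_page(page)]

# Toy stand-in for a general search engine (GSE) query.
def gse_search(query):
    return [
        {"url": "http://example.com/camera-x100", "text": "Camera X100 specs and price"},
        {"url": "http://example.com/cameras-list", "text": "Top 10 cameras of 2014"},
    ]

# Toy stand-in for a trained classifier: a real one would use
# machine-learned content features, not a keyword check.
def is_object_page(page):
    return "specs" in page["text"]

filtered = filter_object_pages(gse_search("camera x100"), is_object_page)
```

The list page is discarded and only the single-object page survives the filter, which is the behaviour the method relies on.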
23

Analýza nástrojů pro ověření přístupnosti internetových stránek dle amerického zákona Sekce 508 / Accessibility evaluation tools analysis according to U.S. law Section 508

Novák, Jiří January 2013 (has links)
The main goal of this thesis is a comparison of tools for web page accessibility evaluation, specifically their ability to find the accessibility issues defined in US law Section 508, paragraph § 1194.22, "Web-based intranet and internet information and applications". A new web page containing the accessibility issues described in § 1194.22 was created for the analysis and used to evaluate how many issues each tool can find. Based on the analysis results, every tool is scored and a final ranking is produced; these results and rankings are the main outcome of the thesis. The theoretical part describes accessibility in detail and introduces the main disabilities that affect people's ability to use computers. It also gives a detailed description of all Section 508 rules, consisting of a Czech translation of every rule, a short description of the rule's meaning, and the conditions under which the rule is met. The practical part describes the test web page, the analysis process, and the results in detail.
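The scoring-and-ranking idea in the abstract can be illustrated with a short sketch. The tool names and detected-issue sets below are hypothetical; the thesis's actual scoring scheme may weight rules differently.

```python
# Known Section 508 issues deliberately placed in the test page.
known_issues = {"img-alt", "table-headers", "form-labels", "frame-titles"}

# Issues each (hypothetical) evaluation tool reported on that page.
detected = {
    "ToolA": {"img-alt", "table-headers"},
    "ToolB": {"img-alt", "table-headers", "form-labels"},
    "ToolC": {"img-alt"},
}

# Score each tool by how many known issues it found, then rank descending.
scores = {tool: len(found & known_issues) for tool, found in detected.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

With these inputs, ToolB ranks first with three of the four seeded issues detected.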
24

Segmentace stránky ve webovém prohlížeči / Page Segmentation in a Web Browser

Zubrik, Tomáš January 2021 (has links)
This thesis deals with web page segmentation in a web browser. An implementation of the Box Clustering Segmentation (BCS) method was created in JavaScript using an automated browser. The implementation consists of two main steps: extracting boxes (leaf DOM nodes) from the browser context, and then clustering them based on the similarity model defined in BCS. The main result of this thesis is a functional implementation of the BCS method usable for web page segmentation. The functionality and accuracy of the implementation are evaluated by comparison with a reference implementation written in Java.
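The two steps the abstract names — box extraction and similarity-based clustering — can be sketched like this. The boxes are given directly as rectangles standing in for extracted leaf DOM nodes, and the centre-distance measure is a simplification, not the actual BCS similarity model.

```python
def box_distance(a, b):
    # Euclidean distance between the centres of two (x, y, w, h) boxes.
    ax, ay = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx, by = b[0] + b[2] / 2, b[1] + b[3] / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def cluster_boxes(boxes, threshold):
    """Greedily group boxes whose distance to an existing cluster
    member is within the threshold; otherwise start a new cluster."""
    clusters = []
    for box in boxes:
        for cluster in clusters:
            if any(box_distance(box, other) <= threshold for other in cluster):
                cluster.append(box)
                break
        else:
            clusters.append([box])
    return clusters

# Two adjacent boxes and one far-away box: expect two clusters.
boxes = [(0, 0, 10, 10), (12, 0, 10, 10), (200, 200, 10, 10)]
clusters = cluster_boxes(boxes, threshold=20)
```

The greedy pass merges the two neighbouring boxes and isolates the distant one, mirroring how visually coherent page regions emerge from clustering.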
25

Posílení konkurenceschopnosti firmy zlepšením informačního systému / Reinforcement of Company's Competitiveness by Information System Improvement

Pohl, Jan January 2010 (has links)
This thesis is about boosting a company's competitiveness by improving its web pages. The first part theoretically describes the main terms and ideas concerning the internet and web applications that are used in the following chapters. The second part introduces the company and presents its analysis. Building on the results of that detailed analysis, the main chapter focuses on proposing new measures and steps to improve the company's position on the market. These steps and their overall benefits are summarized in the last part of the thesis.
26

Návrh informačního systému pro rezervace stolů a donáškovou službu pro moderní restaurace / Concept of Information System for Table Reservation and Delivery Service for Modern Restaurant

Bumbál, Lukáš January 2013 (has links)
The aim of this thesis is to propose a system for booking a table at a restaurant and ordering food delivery. The work is divided into five chapters and contains 44 images, 8 tables, and 1 appendix. The first chapter describes the objective of the work and the methods and procedures used. The second chapter reviews the booking and delivery systems available on the market. The third chapter presents a SWOT analysis for the La Fiamma restaurant in Bratislava. In the fourth chapter, the table reservation system for this restaurant is designed and developed, together with the food delivery ordering system. The next section describes the economic benefits of implementing the described system. The result of this thesis is a functional table reservation module forming part of the La Fiamma restaurant's web portal.
27

Automatize návrhu jednoduchých vstřikovacích forem pro nástrojárny / Design automation of simple inject moulds for tool factory

Žváček, Michal January 2008 (has links)
The thesis is concerned with the design automation of simple injection moulds for plastics. It focuses on making the CAD design process more effective and automated with the help of parametric 3D mould templates, which serve for the rapid design of the tools' basic mechanisms. The mould template is driven by the length of the side moulding plate in the required range from 300 to 700 mm. To manage the standardized parts, a web portal containing a database of the individual parts was created; these parts can be simply inserted into mould assemblies via hypertext links in Catia V5. The thesis includes a description of the whole automation system and a comparison of the new CAD design approach with the original one.
28

Tu taller

Alarcon Tomateo, Judith Soledad, Barandiaran Villaverde, Ornella Liliana, Cordova Roda, Juan Vidal, García Fernandez, Alfredo Kevin, Torero Jaymez, Alfredo 01 July 2019 (has links)
This project analyzes the feasibility of a business model based on the creation of a website whose purpose is to help vehicle owners find mechanical workshops offering the technical service they need. Our main hypotheses focus on two groups: on the one hand, the workshops and their willingness to join the Tutaller network through a monthly payment; on the other, whether vehicle owners would be willing to create an account on our platform. These two hypotheses were validated through various interviews in districts such as Chorrillos, Surquillo, Barranco, and Miraflores. Today, workshop administrators rely mainly on traditional marketing and sometimes pay to promote themselves in the "yellow pages"; only a few invest in advertising on Facebook or a website. This creates complications for both the workshops and the vehicle owners, since the latter lack the information needed to choose a workshop, which in turn means the workshops do not acquire new clients quickly. The final conclusion of the project is that the business model is profitable for investors; more specifically, the net profit is estimated to turn positive in year 2. / Research paper
29

Mobil applikation eller responsiv webbplats? : En studie om vilka designaspekter som är viktiga vid utökning av ett söksystem på Internet till en smartphone / Mobile application or responsive website? : A study on the design aspects that are important in extending a search engine on Internet to a smartphone

Davidsson Pajala, Therese, Augustin, Ansam January 2012 (has links)
This paper reports a study of how a search service on the Internet can be complemented, either with a mobile application or with a responsive website, to facilitate use on a smartphone. The focus of the study is the Swedish National Archives' (Riksarkivet) search service, the National Archival Database (NAD), which is not currently adapted to mobile devices. Our aim is also to investigate users' attitudes toward applications and responsive websites and how these views differ between user groups. Together with findings from previous research, a study was conducted to examine the pros and cons of apps versus responsive websites. We chose two complementary data collection methods: a quantitative web survey and semi-structured qualitative interviews. A total of nine interviews were conducted, with three members from each of the National Archives' main target groups, and an online survey was published on the websites of the National Archives and the Stockholm City Archives.
30

A Domain Based Approach to Crawl the Hidden Web

Pandya, Milan 04 December 2006 (has links)
There is a great deal of research on indexing the Web, and ever more sophisticated Web crawlers are being designed to search and index it faster. But all these traditional crawlers crawl only the part of the Web we call the "Surface Web"; they are unable to crawl its hidden portion. Traditional crawlers retrieve content only from surface Web pages, i.e. a set of pages linked by hyperlinks, and therefore ignore the tremendous amount of information hidden behind search forms on Web pages. Most published research aims to detect such searchable forms and perform a systematic search over them. Our approach is based on a Web crawler that analyzes search forms and fills them with appropriate content, in order to retrieve the maximum amount of relevant information from the underlying database.
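The form-detection-and-filling step the abstract describes can be sketched with the standard library alone. This is a hedged illustration: the field names, domain terms, and pairing strategy are assumptions, not the thesis's actual crawler.

```python
from html.parser import HTMLParser

class FormFieldFinder(HTMLParser):
    """Collect the names of text inputs found inside <form> elements."""
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.fields = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.in_form = True
        elif tag == "input" and self.in_form:
            # Treat a missing type attribute as a text input, per HTML defaults.
            if attrs.get("type", "text") == "text" and "name" in attrs:
                self.fields.append(attrs["name"])

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

def fill_form(fields, domain_terms):
    # Pair each detected text field with a term from the domain vocabulary,
    # cycling through the terms if there are more fields than terms.
    return {name: domain_terms[i % len(domain_terms)]
            for i, name in enumerate(fields)}

# A minimal page with one searchable form (hypothetical markup).
html = '<form action="/search"><input type="text" name="q"></form>'
finder = FormFieldFinder()
finder.feed(html)
query = fill_form(finder.fields, ["toyota", "honda"])
```

Submitting each filled query to the form's action URL is what would surface hidden-database records a link-following crawler never reaches.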
