161

Möglichkeiten des neuen WWW-Standards XML / Possibilities of the new WWW standard XML

Kreulich, Klaus 28 October 1998 (has links)
Overview of the use of XML; new possibilities for digital libraries; introduction to the basic concepts of XML and SGML
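As a concrete illustration of the basic concepts such an overview introduces (elements, attributes, nesting, well-formedness), here is a minimal sketch using only Python's standard library; the sample document and its element names are invented, not taken from the thesis.

```python
# A minimal sketch of core XML concepts: elements, attributes, and
# nesting. The document below is hypothetical, for illustration only.
import xml.etree.ElementTree as ET

DOC = """<?xml version="1.0" encoding="UTF-8"?>
<library>
  <thesis id="161" lang="de">
    <title>Moeglichkeiten des neuen WWW-Standards XML</title>
    <author>Kreulich, Klaus</author>
    <year>1998</year>
  </thesis>
</library>"""

root = ET.fromstring(DOC)           # parse the document into an element tree
for thesis in root.iter("thesis"):  # walk every <thesis> element
    title = thesis.findtext("title")
    year = thesis.findtext("year")
    print(f'{thesis.get("id")}: {title} ({year})')
```

Unlike SGML, XML allows no omitted tags and needs no document type definition: a parser such as ElementTree simply rejects any input that is not well-formed.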
162

Web accessibility

Strobel, Cornelia 30 September 2003 (has links) (PDF)
Workshop Mensch-Computer-Vernetzung. Web accessibility: designing web pages so that they can be used as far as possible without restriction on many different access devices (screen readers, screen magnifiers) and under varying technical conditions (slow connections, outdated software, no color).
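One way to make such guidelines concrete is to check pages automatically. A minimal, hypothetical sketch using only the Python standard library: it flags img elements that lack the alt attribute on which screen readers depend. The sample page and class name are invented.

```python
# Hypothetical sketch of one accessibility check implied above:
# <img> elements without an alt attribute are invisible to screen readers.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

page = ('<html lang="de"><body><img src="logo.png">'
        '<img src="map.png" alt="Campus map"></body></html>')
checker = AltTextChecker()
checker.feed(page)
print(f"images without alt text: {checker.missing_alt}")  # -> 1
```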
163

Wrapper application generation for the semantic web: an XWRAP approach

Han, Wei, January 2003 (has links) (PDF)
Thesis (Ph. D.)--College of Computing, Georgia Institute of Technology, 2004. Directed by Ling Liu. / Vita. Includes bibliographical references (leaves 153-158).
164

An XML-based knowledge management system of port information for U.S. Coast Guard Cutters

Stewart, Jeffrey D. January 2003 (has links) (PDF)
Thesis (M.S. in Information Systems Technology)--Naval Postgraduate School, March 2003. / Thesis advisor(s): Magdi N. Kamel, Gordon H. Bradley. Includes bibliographical references (p. 101-103). Also available online.
165

Vers une détection des attaques de phishing et pharming côté client / Towards client-side detection of phishing and pharming attacks

Gastellier-Prevost, Sophie 24 November 2011 (has links) (PDF)
The development of broadband Internet access and the expansion of electronic commerce have brought with them new and highly successful attacks. One of them looms particularly large in the collective mind: the attack that goes straight for Internet users' wallets. Its most widespread and best-known form is phishing. Mostly delivered through spam campaigns, this attack aims to steal confidential information (e.g. username, password, bank card number) from users by impersonating merchant and/or banking sites. Over the years these attacks have been refined to the point of offering counterfeit websites that, apart from the URL visited, visually imitate the original sites to perfection. Through lack of vigilance, many users then confidently hand over confidential data. In the first part of this thesis, among the existing means of protecting against and detecting these attacks, we focus on a mechanism that is easy for users to adopt: anti-phishing toolbars integrated into the web browser. The detection performed by these toolbars relies on blacklists and heuristic tests. For the heuristic tests in use (whether applied to the URL or to the content of the web page), we evaluate their usefulness and effectiveness in distinguishing legitimate sites from phishing sites. In particular, this work identifies the decisive heuristics while discussing how durable they are likely to be. A second, less well-known variant of this attack, pharming, can be considered a sophisticated version of phishing. The goal of the attack is the same and the visited website resembles the original just as closely, but, unlike phishing, the visited URL is this time also completely identical to the original. Carried out through upstream DNS corruption, these attacks have the advantage of requiring no communication effort from the attacker, who merely has to wait for users to visit their usual site. The absence of visible signs makes the attack particularly effective and formidable, even for a vigilant user. Considerable efforts have been made on the network side to address this problem; nevertheless, the client side remains too exposed and vulnerable. In the second part of this thesis, through two proposals designed to be integrated into the client browser, we introduce a technique for detecting these attacks that couples an analysis of DNS responses with a comparison of web pages. Both proposals rely on reference elements obtained via an alternative DNS server, their main difference lying in how the reference web page is retrieved. Through two phases of experimentation, we demonstrate the viability of the proposed concept.
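A minimal sketch of the detection idea summarized above, not the implementation from the thesis: resolve the visited domain through both the system resolver and an alternative DNS server, and fall back to a page comparison when they disagree. It assumes the third-party dnspython package; the alternative server address, the Host-header fetch, and the similarity threshold are all illustrative choices.

```python
# Hypothetical sketch of the proposed client-side check: couple a DNS
# response comparison with a web page comparison. Requires dnspython
# (pip install dnspython); threshold and server choice are assumptions.
import difflib
import socket
import urllib.request

import dns.resolver

ALT_DNS = "9.9.9.9"  # illustrative alternative DNS server

def resolve_via(server: str, domain: str) -> set[str]:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    return {r.to_text() for r in resolver.resolve(domain, "A")}

def looks_like_pharming(domain: str, threshold: float = 0.85) -> bool:
    default_ips = {socket.gethostbyname(domain)}  # system resolver
    reference_ips = resolve_via(ALT_DNS, domain)  # alternative resolver
    if default_ips & reference_ips:
        return False  # both resolvers agree on at least one address
    # Resolvers disagree: fetch both pages and compare their content.
    local = urllib.request.urlopen(f"http://{domain}/").read().decode("utf-8", "replace")
    ref_ip = next(iter(reference_ips))
    req = urllib.request.Request(f"http://{ref_ip}/", headers={"Host": domain})
    reference = urllib.request.urlopen(req).read().decode("utf-8", "replace")
    similarity = difflib.SequenceMatcher(None, local, reference).ratio()
    return similarity < threshold  # very different pages -> suspicious
```

The thesis's two proposals differ precisely in how the reference page is obtained; this sketch simply fetches it from the alternatively resolved address.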
166

Microsoft Visual Studio och osCommerce - en jämförelse mellan två verktyg / Microsoft Visual Studio and osCommerce - a comparison between the two tools

Alshami, Nada, Jasem, Delal January 2014 (has links)
This report presents a comparison between two different tools used to create an online store, developed by two students at the Technical University in Jönköping. The web shop will benefit both the customers and the administrator of a food firm known as Mattias' Livs. The company wanted an online store that supports its sales operation and gives it full control of its stock. The aim of this thesis is to create an online store that offers customers the ability to shop online and helps the staff operate the company in a simpler and more efficient way, reducing the need for human resources and thus lowering costs for the company. The aim also includes a comparison of the two tools used to create the online store. At the start of the work, the authors discussed the problem with the client and arrived at a set of requirements on which the work is based. The report includes an introduction chapter divided into two sections: the first describes the background and problem description, while the second describes the purpose of the thesis and the questions the students answer at the end of the report. The study compares two different tools, Microsoft Visual Studio and osCommerce. Visual Studio has been a market leader for Internet-based applications since 1997, while osCommerce is a newer tool that has gained significant market share. The two tools are examined through the creation of two websites, one built in each tool. The results of this study show that Microsoft Visual Studio is the more efficient alternative for creating stable web applications, but the tool also has some usability problems due to its programming complexity. osCommerce is a simple tool that has some issues with design customization, while at the same time offering a usable interface for the administrator. The web shop was published online by installing the application on a host server, which means customers can order products online.
167

Classification of HTML Documents

Xie, Wei January 2006 (has links)
Text classification is the task of mapping a document into one or more classes based on the presence or absence of words (or features) in the document. It has been studied intensively, and many classification techniques and algorithms have been developed. This thesis focuses on the classification of online documents, a task that has become more important with the development of the World Wide Web. The WWW vastly increases the availability of online documents in digital format and has highlighted the need to classify them. Against this background, “automatic Web Classification” has emerged: approaches that classify HTML-like documents into classes or categories using not only methods inherited from the traditional text classification process but also the extra information provided only by web pages. Our work is based on the fact that web documents contain not only ordinary features (words) but also extra information, such as metadata and hyperlinks, that can benefit the classification process. The aim of this research is to study various ways of using this extra information, in particular the hyperlink information provided by HTML documents (web pages). The merit of the approach developed in this thesis is its simplicity compared with existing approaches. We present different ways of using hyperlink information to improve the effectiveness of web classification. Unlike other work in this area, we use only the mappings between linked documents and their own class or classes. In this case, we only need to add a few features, called linked-class features, to the datasets and then apply classifiers to them. In the numerical experiments we adopted two well-known text classification algorithms, Support Vector Machines and BoosTexter. The results show that classification accuracy can be improved by using mixtures of ordinary and linked-class features. Moreover, out-links usually work better than in-links in classification. We also analyse and discuss the reasons behind this improvement. / Master of Computing
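The linked-class idea amounts to a small change in feature extraction. A hedged sketch using scikit-learn with a linear SVM (standing in for the SVM experiments; BoosTexter is not reproduced here): the classes of out-linked documents are appended as synthetic tokens such as linkclass_sport, and the classifier is trained on the mixture of ordinary and linked-class features. The tiny dataset and token scheme are invented.

```python
# Hypothetical sketch of "linked-class features": encode the classes of
# linked documents as extra tokens alongside the ordinary word features.
# Uses scikit-learn; the toy dataset and token scheme are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def with_link_features(text: str, out_link_classes: list[str]) -> str:
    # Append one synthetic token per class observed among out-linked pages.
    return text + " " + " ".join(f"linkclass_{c}" for c in out_link_classes)

docs = [
    with_link_features("cup final goal referee", ["sport", "sport"]),
    with_link_features("stocks merger quarterly earnings", ["finance"]),
    with_link_features("transfer window striker", ["sport"]),
    with_link_features("bond yields central bank", ["finance", "finance"]),
]
labels = ["sport", "finance", "sport", "finance"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(docs, labels)

query = with_link_features("match tonight", ["sport"])
print(model.predict([query]))  # -> ['sport']
```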
169

Scheduling on Web servers

Chen, Yanxiao 2003 (has links)
Thesis (M.C.S.) - Carleton University, 2003. / Includes bibliographical references (p. 100-103). Also available in electronic format on the Internet.
170

Entwurf einer auf XML basierenden Beschreibungssprache für Benutzerschnittstellen im Kontext von Mobile-Agenten-Systemen / Design of an XML-based description language for user interfaces in the context of mobile agent systems

Baldauf, Axel. January 2003 (has links)
Diploma thesis (Diplomarbeit), Fachhochschule Konstanz, 2003.
