1

Actual Accessibility: A Study of Cultural Institution Web Content

Meredith B. Rendall 10 April 2007
In 1998, the United States Congress amended Section 508 of the Rehabilitation Act to require federal agencies to make electronic and information technology accessible. The first accessibility guidelines from the World Wide Web Consortium’s Web Accessibility Initiative, Web Content Accessibility Guidelines 1.0, were published in 1999. This study tests the usable accessibility of cultural institution web sites. Four cultural institution web sites were selected for evaluation: two that were WCAG 1.0 compliant and two that were not. A combination of qualitative and quantitative analysis was conducted. Significant differences were found in the perceived usability of the guideline-compliant web sites; significance was found for one of the three tasks. Overall, the guideline-compliant sites received higher usability ratings, but the task completion rates did not support a claim of greater usability.
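To give a concrete feel for what guideline conformance involves (a minimal hypothetical sketch, not the study's evaluation instrument): WCAG 1.0's first checkpoint requires a text equivalent for every non-text element, a property that can be partially machine-checked by scanning a page for img tags without alt attributes.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags that lack an alt attribute (WCAG 1.0 checkpoint 1.1)."""
    def __init__(self):
        super().__init__()
        self.missing = []  # (line, column) of each <img> without a text equivalent

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())

checker = AltTextChecker()
checker.feed('<p>Logo: <img src="logo.png"> <img src="map.png" alt="Exhibit map"></p>')
print(checker.missing)  # one entry: the first image has no alt text
```

Automated checks of this kind catch only the mechanical side of conformance, which is why the study pairs guideline compliance with human usability testing.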
2

System Support for Hosting Services on Server Cluster

Tseng, Chun-wei 22 July 2004
With the rapid spread of Internet infrastructure and the explosive growth of World Wide Web services, many traditional companies have web-enabled their business over the Internet and use the WWW as their application platform. Web services such as on-line search, e-learning, e-commerce, multimedia, network services, and e-government need stable and powerful web servers to support this huge demand. Web hosting services are becoming increasingly important: service providers offer system resources to store, and provide Web access to, content from individuals, institutions, and companies that lack the resources or expertise to maintain their own Web sites. A Web server cluster is a popular architecture used by hosting service providers to create scalable and highly available solutions. However, hosting a variety of contents from different customers on such a distributed server system raises new design and management problems. This dissertation describes work on constructing a system to address the challenges of hosting Web content in a server cluster environment. Building on the software-based content-aware distributor, some highly repetitive and fixed tasks originally handled by the software module are offloaded to a hardware module to speed up packet processing and to share the load on the software module. Our software-based distributor is implemented in the Linux kernel as a loadable module, and a few user-level system calls are provided to support the integration of resource management.
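As a rough user-space illustration of the content-aware idea (hypothetical code; the dissertation's distributor lives in the Linux kernel), a dispatcher can parse the HTTP request line and route all requests for one hosted customer's content to the same cluster node:

```python
import hashlib

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # cluster nodes (made-up addresses)

def pick_backend(http_request_line: str) -> str:
    """Content-aware dispatch: route by the requested path, not by connection.

    Hashing the hosted site's name keeps all requests for one customer's
    content on the same node, unlike pure layer-4 balancing, which sees
    only IP addresses and ports.
    """
    # e.g. "GET /customer-a/index.html HTTP/1.1"
    path = http_request_line.split()[1]
    site = path.split("/")[1]          # first path segment = hosted site
    digest = hashlib.md5(site.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

print(pick_backend("GET /customer-a/index.html HTTP/1.1"))
```

Pinning a customer's content to a node keeps that node's cache warm for it, which is one reason content-aware (layer-7) switching can outperform purely connection-based balancing for hosting workloads.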
3

Temporales Web-Management

Ebner, Walter 10 1900
This thesis develops a system that integrates time into the World Wide Web, on the one hand to make the evolution of web pages traceable and to allow any past state to be restored, and on the other hand to provide a way of equipping resources with additional temporal information. After a short introduction describing the focus of the work, the second chapter is devoted to fundamental questions of time dimensions on the Web. Representations of date and time are presented, together with ways of integrating temporal information using existing web technologies. Concepts from temporal databases are carried over to the Web, and it is shown that supporting transaction time makes versioning of web documents possible. Chapter 3 then turns to further concepts for representing time on the World Wide Web. The Dublin Core Metadata Initiative is introduced, and examples show how such data can be embedded in HTML documents. In the fourth chapter, general requirements for temporal content management are formulated, and a number of approaches to meeting the development specification are described. It is shown that none of the existing systems offers a truly satisfactory solution. Chapter 5 therefore presents a prototype of a temporal web information system, developed as part of this work and constituting its centrepiece. (Author's abstract)
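The transaction-time idea carried over from temporal databases can be sketched in a few lines (an illustrative toy, not the thesis's prototype): every write to a resource is retained with its timestamp, so any past state can be reconstructed with an "as of" lookup.

```python
import bisect
from datetime import datetime, timezone

class VersionStore:
    """Toy transaction-time store: every write is kept, nothing is overwritten."""
    def __init__(self):
        self.history = {}  # url -> list of (timestamp, content), in write order

    def save(self, url: str, content: str) -> None:
        now = datetime.now(timezone.utc)
        self.history.setdefault(url, []).append((now, content))

    def as_of(self, url: str, when: datetime) -> str | None:
        """Return the content that was current at time `when`, if any."""
        revisions = self.history.get(url, [])
        timestamps = [t for t, _ in revisions]
        i = bisect.bisect_right(timestamps, when)
        return revisions[i - 1][1] if i else None

store = VersionStore()
store.save("/index.html", "version 1")
t = datetime.now(timezone.utc)        # remember a point in time
store.save("/index.html", "version 2")
print(store.as_of("/index.html", t))  # -> "version 1"
```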
4

Problematika obsahového webu / The Issue of Content-Based Website

Sova, Martin January 2012
The theme of the present thesis is the content-based website. The paper defines the concept using a layered model of how a content-based website functions, and analyses its operation from the perspective of systems theory on the basis of the major transformation functions identified in the operation of web content. The reader is acquainted with a model of content distribution on the website and with the possibilities of financing its operation. The hypotheses formulated test the possibilities of a return on investment using specific forms of advertising; their validity is then tested on data collected during the operation of specific content sites. The processes involved in creating web content are then analysed further. A practical example covers the selection and implementation of an information system built to support the creation of content on a particular website: by analysing the operating processes, it describes how appropriate resources are selected and deployed. The goal of this thesis is to help answer whether the operation of a content-based website can be financed by the placement of advertising elements, to identify what processes are involved in creating content for the site, and to show how to select and implement an information system to support them.
5

Centralizace a správa distribuovaných informaci / Centralization and maintenance of distributed information

Valčák, Richard January 2010
The master’s thesis deals with web mining, information sources, methods of unattended access to these sources, and a summary of available methods and tools. Web data mining is a very useful technique for acquiring required information for further processing. The work focuses on the design of a system for gathering required information from given sources. The thesis consists of three parts that employ the developed library: an API for programmers, a server application for gathering information over time (exchange rates, for instance), and an example AWT application for processing tables available on the internet.
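The developed library itself is Java-based (the example application uses AWT); purely as an illustration of the table-processing task, a hypothetical Python sketch using only the standard library might extract rows like this:

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the cell text of every <tr>/<td> row in a fetched page."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())

p = TableExtractor()
p.feed("<table><tr><th>Currency</th><th>Rate</th></tr>"
       "<tr><td>EUR</td><td>25.4</td></tr></table>")
print(p.rows)  # [['Currency', 'Rate'], ['EUR', '25.4']]
```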
6

Utveckling och anpassning av webb för mobila enheter : En utvärdering och fallstudie

Sundberg, Alexander January 2016
The increasing use of mobile devices places new demands on the design of a company's web services. The appearance, size, and interface of web pages need to be adapted to all types and sizes of mobile platforms. The purpose of this work is to investigate the solution options available for adapting to mobile devices, and to propose and evaluate a solution against a set of criteria. A prototype is developed and tested in a case study. The solution options investigated are native applications, hybrid applications, mobile sites, responsive interfaces/responsive frameworks, and Web Content Management Systems. The options are evaluated against the criteria of functionality and usability, learning/development time, platform support and availability, performance, maintenance and further development, popularity, cost/licensing, and conversion possibilities. The option that proves most advantageous according to these criteria is the responsive interface. In the case study, a prototype was developed based on a responsive framework; the implementation has been evaluated and confirms the advantages of a responsive interface. The conclusions highlight the advantages and disadvantages of the various options, identify problems and opportunities from the case study (for example, no major performance differences compared with a static web page were found, but some conflicts between program libraries did appear), and finally offer general recommendations and lessons learned for anyone facing the mobile adaptation of a website.
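The evaluation amounts to scoring each option against weighted criteria; a minimal sketch of such a decision matrix (the weights and scores below are invented for illustration, not the thesis's figures):

```python
# Hypothetical weights and 1-5 scores; the thesis's own ratings are not reproduced here.
criteria_weights = {"usability": 3, "dev_time": 2, "platform_support": 3,
                    "performance": 2, "maintenance": 2, "cost": 1}

options = {
    "native app":        {"usability": 5, "dev_time": 2, "platform_support": 2,
                          "performance": 5, "maintenance": 2, "cost": 2},
    "responsive design": {"usability": 4, "dev_time": 4, "platform_support": 5,
                          "performance": 4, "maintenance": 4, "cost": 4},
}

def total(scores):
    """Weighted sum of an option's criterion scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in sorted(options.items(), key=lambda kv: -total(kv[1])):
    print(f"{name}: {total(scores)}")
```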
7

Adaptive, adaptable, and mixed-initiative in interactive systems : an empirical investigation to examine the usability issues of using adaptive, adaptable and mixed-initiative approaches in interactive systems

Al Omar, Khalid Hamad January 2009
This thesis investigates the use of static, adaptive, adaptable and mixed-initiative approaches to the personalisation of content and graphical user interfaces (GUIs). This empirical study consisted of three experimental phases. The first examined the use of static, adaptive, adaptable and mixed-initiative approaches to web content. More specifically, it measured the usability (efficiency, frequency of error occurrence, effectiveness and satisfaction) of an e-commerce website. The experiment was conducted with 60 subjects and was tested empirically by four independent groups (15 subjects each). The second experiment examined the use of adaptive, adaptable and mixed-initiative approaches to GUIs. More specifically, it measured the usability (efficiency, frequency of error occurrence, effectiveness and satisfaction) of GUI control structures (menus). In addition, it empirically investigated the effects of content size on five different personalised menu types. In order to carry out this comparative investigation, two independent experiments were conducted, on small menus (17 items) and large ones (29 items) respectively. The experiment was conducted with 60 subjects and was tested empirically by four independent groups (15 subjects each). The third experiment was conducted with 40 subjects and was tested empirically by four dependent groups (5 subjects each). The aim of the third experiment was to mitigate the drawbacks of the adaptive, adaptable and mixed-initiative approaches, to improve their performance and to increase their usability by using multimodal auditory solutions (speech, earcons and auditory icons). The results indicate that the size of content affects the usability of personalised approaches. In other words, as the size of content increases, so does the need for the adaptive and mixed-initiative approaches, whereas that for the adaptable approach decreases. A set of empirically derived guidelines was also produced to assist designers with the use of adaptive, adaptable and mixed-initiative approaches to web content and GUI control structures.
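To make the adaptive/adaptable distinction concrete (a hypothetical sketch, not the thesis's experimental software): an adaptive menu reorders itself from observed usage, whereas an adaptable one exposes the reordering control to the user.

```python
from collections import Counter

class AdaptiveMenu:
    """Adaptive menu: the system reorders items based on observed usage."""
    def __init__(self, items, promoted=3):
        self.items = list(items)
        self.promoted = promoted          # how many hot items float to the top
        self.uses = Counter()

    def select(self, item):
        self.uses[item] += 1

    def render(self):
        hot = [i for i, _ in self.uses.most_common(self.promoted)]
        rest = [i for i in self.items if i not in hot]
        return hot + rest                 # frequently used items first, stable tail

menu = AdaptiveMenu(["New", "Open", "Save", "Print", "Export"])
for choice in ["Export", "Export", "Save"]:
    menu.select(choice)
print(menu.render())  # ['Export', 'Save', 'New', 'Open', 'Print']
```

A mixed-initiative design would sit between the two: the system proposes such a reordering, and the user accepts or overrides it.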
8

Combating client fingerprinting through the real-time detection and analysis of tailored web content

Born, Kenton P. January 1900
Doctor of Philosophy / Department of Computing Science / David Gustafson / The web is no longer composed of static resources. Technology and demand have driven the web towards a complex, dynamic model that tailors content toward specific client fingerprints. Servers now commonly modify responses based on the browser, operating system, or location of the connecting client. While this information may be used for legitimate purposes, malicious adversaries can also use this information to deliver misinformation or tailored exploits. Currently, there are no tools that allow a user to detect when a response contains tailored content. Developing an easily configurable multiplexing system solved the problem of detecting tailored web content. In this solution, a custom proxy receives the initial request from a client, duplicating and modifying it in many ways to change the browser, operating system, and location-based client fingerprint. All of the requests with various client fingerprints are simultaneously sent to the server. As the responses are received back at the proxy, they are aggregated and analyzed against the original response. The results of the analysis are then sent to the user along with the original response. This process allowed the proxy to detect tailored content that was previously undetectable through casual browsing. Theoretical and empirical analysis was performed to ensure the multiplexing proxy detected tailored content at an acceptable false alarm rate. Additionally, the tool was analyzed for its ability to provide utility to open source analysts, cyber analysts, and reverse engineers. The results showed that the proxy is an essential, scalable tool that provides capabilities that were not previously available.
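A heavily simplified sketch of the multiplexing idea (hypothetical code; the dissertation's proxy also varies operating system and location, and analyzes differences rather than comparing raw bytes): issue the same request under several client fingerprints and flag responses that diverge from the baseline.

```python
import urllib.request

FINGERPRINTS = {  # assumed example User-Agent strings
    "original": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "firefox_linux": "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
    "safari_mac": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15",
}

def fetch(url, user_agent):
    """Fetch the URL while presenting the given browser fingerprint."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def detect_tailoring(url):
    """Compare each fingerprint's response against the 'original' one."""
    bodies = {name: fetch(url, ua) for name, ua in FINGERPRINTS.items()}
    baseline = bodies["original"]
    return {name: body != baseline for name, body in bodies.items() if name != "original"}

# Any True value means the server returned different content for that fingerprint.
# A naive byte comparison like this over-triggers on dynamic pages; controlling
# that false alarm rate is exactly the analysis problem the dissertation addresses.
print(detect_tailoring("http://example.com/"))
```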
9

Baobab LIMS: An open source biobank laboratory information management system for resource-limited settings

Bendou, Hocine January 2019
Philosophiae Doctor - PhD / A laboratory information management system (LIMS) is central to the informatics infrastructure that underlies biobanking activities. To date, a wide range of commercial and open source LIMS are available. The decision to opt for one LIMS over another is often influenced by the needs of the biobank clients and researchers, as well as available financial resources. However, finding a LIMS that incorporates all possible requirements of a biobank may often be a complicated endeavour. The need to implement biobank standard operating procedures, as well as to stimulate the use of standards for biobank data representation, motivated the development of Baobab LIMS, an open source LIMS for biobanking. Baobab LIMS comprises modules for biospecimen kit assembly, shipping of biospecimen kits, storage management, analysis requests, reporting, and invoicing. Baobab LIMS is based on the Plone web-content management framework, a server-client-based system, whereby the end user is able to access the system securely through the internet on a standard web browser, thereby eliminating the need for standalone installations on all machines. The Baobab LIMS components were tested and evaluated in three human biobanks. The testing of the LIMS modules aided in mapping the biobanks' requirements to the LIMS functionalities and, furthermore, helped to reveal new user suggestions, such as the enhancement of the online documentation. The user suggestions are shown to be important both for strengthening the LIMS and for biobank sustainability. Ultimately, the practical LIMS evaluations showed the ability of Baobab LIMS to be used in managing the operations of human biobanks with relatively different biobanking workflows.
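To give a feel for the bookkeeping behind modules such as kit assembly and storage management (an illustrative data model only; Baobab LIMS's actual Plone content types are not reproduced here):

```python
from dataclasses import dataclass

@dataclass
class Biospecimen:
    """Minimal record a biobank LIMS might keep per sample (illustrative only)."""
    sample_id: str
    sample_type: str      # e.g. "plasma", "DNA"
    kit_id: str           # the biospecimen kit the sample arrived in
    freezer: str          # storage hierarchy: freezer -> shelf -> box -> position
    shelf: int
    box: str
    position: str

s = Biospecimen("BB-0001", "plasma", "KIT-42", "Freezer-A", 3, "Box-17", "C4")
print(f"{s.sample_id} stored at {s.freezer}/shelf {s.shelf}/{s.box}/{s.position}")
```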
10

A Study of Replicated and Distributed Web Content

John, Nitin Abraham 10 August 2002
" With the increase in traffic on the web, popular web sites get a large number of requests. Servers at these sites are sometimes unable to handle the large number of requests and clients to such sites experience long delays. One approach to overcome this problem is the distribution or replication of content over multiple servers. This approach allows for client requests to be distributed to multiple servers. Several techniques have been suggested to direct client requests to multiple servers. We discuss these techniques. With this work we hope to study the extent and method of content replication and distribution at web sites. To understand the distribution and replication of content we ran client programs to retrieve headers and bodies of web pages and observed the changes in them over multiple requests. We also hope to understand possible problems that could face clients to such sites due to caching and standardization of newer protocols like HTTP/1.1. The main contribution of this work is to understand the actual implementation of replicated and distributed content on multiple servers and its implication for clients. Our investigations showed issues with replicated and distributed content and its effects on caching due to incorrect identifers being send by different servers serving the same content. We were able to identify web sites doing application layer switching mechanisms like DNS and HTTP redirection. Lower layers of switching needed investigation of the HTTP responses from servers, which were hampered by insuffcient tags send by servers. We find web sites employ a large amount of distribution of embedded content and its ramifcations on HTTP/1.1 need further investigation. "
