  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The Study of Learning Activity Supporting Mechanism on LMS

Chuang, Chia-Yi 08 September 2004 (has links)
Teaching involves factors such as the learning environment, learning theory, and educational psychology. A high-quality online teaching website must combine education experts with information-engineering specialists, each contributing within their own field: the education expert designs the teaching schemes used on the teaching platform, while the information specialist provides the education expert with an easy-to-use functional interface, rather than arbitrarily building the system around whatever teaching theory suits one specific field. This research mainly examines the activity types commonly used in education experts' teaching schemes, including web-based teaching materials, assessments, online homework, and topic discussions, and studies how to combine them so that teachers can plan teaching activities from these components, with further processing applied to the teacher's designed teaching scheme. This approach preserves the advantages of network-based teaching while avoiding its shortcomings, for instance: (1) beginners easily lose their learning direction and become frustrated; (2) aimless browsing makes it impossible to construct a complete knowledge structure; (3) cognitive overload; (4) knowledge structures are difficult to integrate. Accordingly, this research delivers two things. First, it offers teachers a course-progress editing interface that lets them easily plan the teaching activities of a course, implementing the teacher's teaching strategy in an automated teaching-arrangement system. Second, it increases the interaction between the system and its users: once the teacher has established the course arrangement, an information-notification system delivers the relevant messages to teachers or students in advance, according to the situation.
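The notification mechanism the abstract describes (messages delivered to participants ahead of each scheduled activity) can be illustrated with a short sketch. This is only an illustration in TypeScript; the thesis does not publish its data model, and every name below is hypothetical.

```ts
// Hypothetical data model for scheduled teaching activities; the thesis's
// actual LMS schema is not published, so these names are invented.
type ActivityKind = "material" | "quiz" | "homework" | "discussion";

interface Activity {
  title: string;
  kind: ActivityKind;
  startsAt: Date;
  notifyBeforeMs: number; // how far in advance to warn participants
}

// Return the activities whose advance-notification window has opened,
// i.e. the messages the notifier should deliver right now.
function dueNotifications(activities: Activity[], now: Date): Activity[] {
  return activities.filter(
    (a) =>
      now.getTime() >= a.startsAt.getTime() - a.notifyBeforeMs &&
      now.getTime() < a.startsAt.getTime()
  );
}

// Example: an online homework due tomorrow, with a one-day warning window,
// is picked up immediately.
const plan: Activity[] = [{
  title: "Homework 1",
  kind: "homework",
  startsAt: new Date(Date.now() + 24 * 3600 * 1000),
  notifyBeforeMs: 24 * 3600 * 1000,
}];
console.log(dueNotifications(plan, new Date()).map((a) => a.title));
```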
2

WebPET: A Performance Evaluation Tool for Web Servers

Chen, Yin-chun 10 September 2006 (has links)
With the development of the Internet, the number of users keeps growing, and companies provide many kinds of services, usually web-based for ease of use. The performance of web servers is therefore a key factor in the quality of these services. From a user's point of view, response time is an important metric of web server performance, so in this paper we implement a passive performance-evaluation tool for web servers. We also discuss the phenomena that appear when bottlenecks occur at a web server, and we run experiments showing how bottlenecks in the CPU, the network, and the disk affect response time. The results show that, depending on the type of request, response time is affected in different ways when the bottleneck is the CPU, the network, or the disk.
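A passive tool like WebPET observes real traffic at the server rather than generating load; its core computation is pairing each request with its response and aggregating the deltas. A minimal sketch of that aggregation step in TypeScript, with an invented trace format (the paper does not specify one):

```ts
// One observed HTTP transaction: when the request arrived and when the
// response completed, in milliseconds. This trace format is invented here.
interface Transaction { requestAt: number; responseAt: number; }

function responseTimeStats(trace: Transaction[]) {
  const latencies = trace
    .map((t) => t.responseAt - t.requestAt)
    .sort((a, b) => a - b);
  const mean = latencies.reduce((s, x) => s + x, 0) / latencies.length;
  const p95 = latencies[Math.floor(0.95 * (latencies.length - 1))];
  return { mean, p95, max: latencies[latencies.length - 1] };
}

// Example: three transactions, one slow (e.g. a disk-bound request).
console.log(responseTimeStats([
  { requestAt: 0, responseAt: 12 },
  { requestAt: 5, responseAt: 19 },
  { requestAt: 9, responseAt: 480 },
]));
```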
3

A comparative evaluation of Web server systems: taxonomy and performance

Ganeshan, Manikandaprabhu 29 March 2006 (has links)
The Internet is an essential resource to an ever-increasing number of businesses and home users. Internet access is increasing dramatically, and hence the need for efficient and effective Web server systems is on the rise. These systems are information engines that are accessed through the Internet by a rapidly growing client base. They are expected to provide good performance and high availability to the end user, and to be resilient to failures at both the hardware and software levels. These characteristics make them suitable for servicing the present and future information demands of the end consumer. In recent years, researchers have concentrated on taxonomies of scalable Web server system architectures, and on routing and dispatching algorithms for request distribution. However, they have not focused on the classification of commercial products and prototypes, which would be of use to business professionals and software architects. Such a classification would help in selecting appropriate products from the market, based on product characteristics, and in designing new products with different combinations of server architectures and dispatching algorithms. Currently, dispatching algorithms are classified as content-blind, content-aware, and Domain Name Server (DNS) scheduling. These classifications are extended, and organized under one tree structure, in this thesis. With the help of this extension, the thesis develops a unified product-based taxonomy that identifies product capabilities by relating them to a classification of scalable Web server systems and to the extended taxonomy of dispatching algorithms. As part of a detailed analysis of Web server systems, generic queuing models, each consisting of a dispatcher unit and a Web server unit, are built. Performance metrics such as throughput, server performance, mean queue size, mean waiting time, mean service time, and mean response time are measured for these generic queuing models. Finally, the correctness of the generic queuing models is evaluated through theoretical and simulation analysis. / May 2005
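The thesis's generic models pair a dispatcher unit with a Web server unit; as a baseline for the metrics it measures, the standard closed-form results for a single M/M/1 server (Poisson arrivals at rate λ, exponential service at rate μ) can be computed directly. A minimal sketch of those textbook formulas, not the thesis's actual model:

```ts
// Standard M/M/1 steady-state formulas (require lambda < mu):
//   utilization        rho = lambda / mu
//   mean queue size    L   = rho / (1 - rho)
//   mean response time W   = 1 / (mu - lambda)   (Little's law: L = lambda * W)
//   mean waiting time  Wq  = W - 1 / mu
function mm1(lambda: number, mu: number) {
  if (lambda >= mu) throw new Error("unstable: arrival rate >= service rate");
  const rho = lambda / mu;
  const meanResponseTime = 1 / (mu - lambda);
  return {
    utilization: rho,
    meanQueueSize: rho / (1 - rho),
    meanResponseTime,
    meanWaitingTime: meanResponseTime - 1 / mu,
    throughput: lambda, // in steady state, all arrivals are served
  };
}

// Example: 80 req/s arriving at a server that can serve 100 req/s
// gives utilization 0.8 and a mean response time of 0.05 s.
console.log(mm1(80, 100));
```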
4

An Investigation into the Applicability of Node.js as a Platform for Web Services

Torstensson, Daniel, Eloff, Erik January 2012 (has links)
This study investigates the applicability of node.js for developing web services. Node.js is a software platform for developing event-driven networking applications using JavaScript. Moreover, the language JavaScript is discussed with regard to features that facilitate development of event-driven software. Node.js's selling point is to be a solution to the problem of massive amounts of concurrent network connections. In addition, it tries to avoid scalability issues that may appear in large web applications. To verify and investigate whether this holds, an evaluation of the platform was conducted by developing an HTTP boot server for Motorola Mobility. The boot server, named Wellington, is used to manage configuration and distribution of set-top box software. Furthermore, an investigation and comparison between event-based and threaded concurrency models has been made. Lastly, the maturity of node.js and its ecosystem of libraries and frameworks is discussed. In conclusion, node.js is an interesting piece of technology and was suitable as a development platform for Wellington. JavaScript is a powerful language and works well for writing event-driven server-side software. When learning to build networking applications, node.js is a good starting point for the event-driven paradigm.
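The event-driven model the thesis evaluates is visible in node.js's core HTTP API: a single-threaded event loop invokes one callback per request instead of dedicating a thread to each connection. A minimal TypeScript sketch of that pattern (not Wellington, whose source is not included here):

```ts
import http from "node:http";

// One process, one event loop: each incoming request fires this callback;
// no thread is created or blocked per connection.
const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`handled ${req.method} ${req.url}\n`);
});

server.listen(8080, () => {
  console.log("event-driven server listening on :8080");
});
```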
5

Improving web server efficiency on commodity hardware

Beltrán Querol, Vicenç 03 October 2008 (has links)
The unstoppable growth of the World Wide Web requires a huge amount of computational resources that must be used efficiently. Nowadays, commodity hardware is the preferred platform for running web server systems because it is the most cost-effective solution. The work presented in this thesis aims to improve the efficiency of current web server systems, allowing web servers to make the most of hardware resources. To this end, we first characterize current web server systems and identify the problems that hinder web servers from providing an efficient utilization of resources. From the study of web servers in a wide range of situations and environments, we have identified two main issues that prevent web server systems from efficiently using current hardware resources. The first is the extension of the HTTP protocol to include connection persistence and security, which dramatically impacts the performance and configuration complexity of traditional multi-threaded web servers. The second is the memory-bounded or disk-bounded nature of some web workloads, which prevents the full utilization of the abundant CPU resources available on current commodity hardware. We propose two novel techniques to overcome these problems. Firstly, we propose a hybrid web server architecture which can easily be implemented in any multi-threaded web server to improve CPU utilization and provide better management of client connections. Secondly, we describe a main-memory compression technique implemented in the Linux operating system that makes optimum use of current multiprocessor hardware, in order to improve the performance of memory-bound web applications. The thesis is supported by an exhaustive experimental evaluation that proves the effectiveness and feasibility of our proposals for current systems. It is worth noting that the main concepts behind the hybrid architecture have recently been implemented in popular web servers like Apache, Tomcat and Glassfish.
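The thesis's hybrid architecture is a modification of multi-threaded servers such as Apache; purely to illustrate the general shape of the idea (an event-driven front end holding many persistent connections cheaply while a small thread pool does the blocking work), here is a hypothetical single-file sketch using Node's worker_threads. It is not the thesis's implementation.

```ts
import http from "node:http";
import { Worker, isMainThread, parentPort } from "node:worker_threads";

if (isMainThread) {
  // Event-driven front end: one event loop holds every (possibly
  // persistent) connection and forwards work to a small thread pool.
  // (CommonJS assumed: __filename points at this file.)
  const pool = [new Worker(__filename), new Worker(__filename)];
  let next = 0;
  let seq = 0;
  http.createServer((req, res) => {
    const worker = pool[next++ % pool.length]; // round-robin over the pool
    const id = seq++;
    const onReply = (msg: { id: number; body: string }) => {
      if (msg.id !== id) return; // reply for another request; keep waiting
      worker.off("message", onReply);
      res.end(msg.body);
    };
    worker.on("message", onReply);
    worker.postMessage({ id, url: req.url });
  }).listen(8080);
} else {
  // Worker thread: free to block (e.g. on disk) without stalling the
  // front end's event loop.
  parentPort!.on("message", ({ id, url }: { id: number; url: string }) => {
    parentPort!.postMessage({ id, body: `processed ${url}\n` });
  });
}
```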
6

Performance Comparison of Uniprocessor and Multiprocessor Web Server Architectures

Harji, Ashif January 2010 (has links)
This thesis examines web-server architectures for static workloads on both uniprocessor and multiprocessor systems to determine the key factors affecting their performance. The architectures examined are event-driven (userver) and pipeline (WatPipe). As well, a thread-per-connection (Knot) architecture is examined for the uniprocessor system. Various workloads are tested to determine their effect on the performance of the servers. Significant effort is made to ensure a fair comparison among the servers. For example, all the servers are implemented in C or C++, and support sendfile and edge-triggered epoll. The existing servers, Knot and userver, are extended as necessary, and the new pipeline-server, WatPipe, is implemented using userver as its initial code base. Each web server is also tuned to determine its best configuration for a specific workload, which is shown to be critical to achieve best server performance. Finally, the server experiments are verified to ensure each is performing within reasonable standards. The performance of the various architectures is examined on a uniprocessor system. Three workloads are examined: no disk-I/O, moderate disk-I/O and heavy disk-I/O. These three workloads highlight the differences among the architectures. As expected, the experiments show the amount of disk I/O is the most significant factor in determining throughput, and once there is memory pressure, the memory footprint of the server is the crucial performance factor. The peak throughput differs by only 9-13% among the best servers of each architecture across the various workloads. Furthermore, the appropriate configuration parameters for best performance varied based on workload, and no single server performed the best for all workloads. The results show the event-driven and pipeline servers have equivalent throughput when there is moderate or no disk-I/O. The only difference is during the heavy disk-I/O experiments where WatPipe's smaller memory footprint for its blocking server gave it a performance advantage. The Knot server has 9% lower throughput for no disk-I/O and moderate disk-I/O and 13% lower for heavy disk-I/O, showing the extra overheads incurred by thread-per-connection servers, but still having performance close to the other server architectures. An unexpected result is that blocking sockets with sendfile outperforms non-blocking sockets with sendfile when there is heavy disk-I/O because of more efficient disk access. Next, the performance of the various architectures is examined on a multiprocessor system. Knot is excluded from the experiments as its underlying thread library, Capriccio, only supports uniprocessor execution. For these experiments, it is shown that partitioning the system so that server processes, subnets and requests are handled by the same CPU is necessary to achieve high throughput. Both N-copy and new hybrid versions of the uniprocessor servers, extended to support partitioning, are tested. While the N-copy servers performed the best, new hybrid versions of the servers also performed well. These hybrid servers have throughput within 2% of the N-copy servers but offer benefits over N-copy such as a smaller memory footprint and a shared address-space. For multiprocessor systems, it is shown that once the system becomes disk bound, the throughput of the servers is drastically reduced. To maximize performance on a multiprocessor, high disk throughput and lots of memory are essential.
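The N-copy approach that performed best on the multiprocessor, one independent server copy per CPU, can be sketched with Node's cluster module. This is only an analogy: the thesis's servers are written in C/C++, and its partitioning also pins subnets and request handling to specific CPUs, which this sketch does not attempt.

```ts
import cluster from "node:cluster";
import http from "node:http";
import os from "node:os";

if (cluster.isPrimary) {
  // N-copy: fork one server process per CPU. Each copy has its own
  // address space, trading the shared-memory benefits of a hybrid
  // server for complete independence between copies.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
} else {
  http.createServer((_req, res) => {
    res.end(`served by copy ${process.pid}\n`);
  }).listen(8080); // the cluster primary distributes connections to copies
}
```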
7

Design and Implementation of a Load Balancing Web Server Cluster

Tseng, Jin-Shan 02 September 2005 (has links)
The Internet has become popular, and many traditional services have migrated to the web stage by stage. A single web server can no longer satisfy the large number of user requests, so the cluster-based web server architecture has become a suitable alternative. The dispatch mechanism plays an important role in a web server cluster, and many load-balancing policies have been proposed recently. However, this research has been based only on simulation, so the performance of these policies in a real system is unknown. These simulations all assume that web traffic follows a heavy-tailed distribution. In our experience, however, this assumption has changed: web content has grown large as network bandwidth has increased, with more and more large files such as video, audio, and trial software. We define this web traffic as a data-intensive workload. In this study, we use a real, data-intensive web site to measure and compare these scheduling policies.
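Two of the content-blind dispatching policies that studies like this compare, round-robin and least-connections, differ only in how the dispatcher picks a back end. A compact sketch of both, with hypothetical interfaces rather than the thesis's actual testbed:

```ts
interface Backend { name: string; activeConnections: number; }

// Round-robin: rotate through back ends regardless of current load.
function roundRobin(backends: Backend[], counter: { n: number }): Backend {
  return backends[counter.n++ % backends.length];
}

// Least-connections: pick the back end serving the fewest requests;
// this matters most when request costs vary widely, as with the large
// files of a data-intensive workload.
function leastConnections(backends: Backend[]): Backend {
  return backends.reduce((a, b) =>
    b.activeConnections < a.activeConnections ? b : a);
}

// Example: one server is busy streaming a large file.
const pool: Backend[] = [
  { name: "web1", activeConnections: 12 },
  { name: "web2", activeConnections: 3 },
];
console.log(leastConnections(pool).name); // "web2"
```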
