51 |
Improving web server efficiency on commodity hardware
Beltrán Querol, Vicenç 03 October 2008
The unstoppable growth of the World Wide Web requires a huge amount of computational resources that must be used efficiently. Nowadays, commodity hardware is the preferred platform to run web server systems because it is the most cost-effective solution. The work presented in this thesis aims to improve the efficiency of current web server systems, allowing the web servers to make the most of hardware resources. To this end, we first characterize current web server systems and identify the problems that hinder web servers from providing an efficient utilization of resources. From the study of web servers in a wide range of situations and environments, we have identified two main issues that prevent web server systems from efficiently using current hardware resources. The first is the extension of the HTTP protocol to include connection persistence and security, which dramatically impacts the performance and configuration complexity of traditional multi-threaded web servers. The second is the memory-bounded or disk-bounded nature of some web workloads, which prevents the full utilization of the abundant CPU resources available on current commodity hardware. We propose two novel techniques to overcome these problems.
Firstly, we propose a Hybrid web server architecture, which can be easily implemented in any multi-threaded web server to improve CPU utilization and provide better management of client connections. Secondly, we describe a main-memory compression technique implemented in the Linux operating system that makes optimum use of current multiprocessor hardware in order to improve the performance of memory-bound web applications. The thesis is supported by an exhaustive experimental evaluation that proves the effectiveness and feasibility of our proposals for current systems. It is worth noting that the main concepts behind the Hybrid architecture have recently been implemented in popular web servers such as Apache, Tomcat and Glassfish.
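A minimal sketch of the idea behind the Hybrid architecture, in Python for brevity (the servers named above are written in C/C++/Java, and this is not the thesis code): one event-driven loop multiplexes idle persistent connections, and a connection occupies a worker thread only while it has a request pending. All names and sizes are illustrative, and the locking and error handling a real server needs are omitted.

    import selectors, socket
    from concurrent.futures import ThreadPoolExecutor

    sel = selectors.DefaultSelector()
    pool = ThreadPoolExecutor(max_workers=32)      # small, fixed worker pool

    def handle(conn):
        data = conn.recv(4096)                     # read the (toy) HTTP request
        if not data:                               # client closed the connection
            conn.close()
            return
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        sel.register(conn, selectors.EVENT_READ, handle)  # idle again: costs no thread

    srv = socket.create_server(("", 8080))
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ, None)

    while True:
        for key, _ in sel.select():
            if key.data is None:                   # listen socket: accept new connection
                conn, _ = srv.accept()
                sel.register(conn, selectors.EVENT_READ, handle)
            else:                                  # request pending: hand off to the pool
                sel.unregister(key.fileobj)        # so it cannot fire again while queued
                pool.submit(key.data, key.fileobj)

In a pure multi-threaded server, an idle persistent connection pins a thread for its whole lifetime; here it merely sits registered in the selector, which is what reduces both the resource cost and the tuning burden of persistent connections.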
|
52 |
Performance Comparison of Uniprocessor and Multiprocessor Web Server Architectures
Harji, Ashif January 2010
This thesis examines web-server architectures for static workloads on both uniprocessor and multiprocessor systems to determine the key factors affecting their performance. The architectures examined are event-driven (userver) and pipeline (WatPipe). As well, a thread-per-connection (Knot) architecture is examined for the uniprocessor system. Various workloads are tested to determine their effect on the performance of the servers. Significant effort is made to ensure a fair comparison among the servers. For example, all the servers are implemented in C or C++, and support sendfile and edge-triggered epoll.
The existing servers, Knot and userver, are extended as necessary, and the new pipeline server, WatPipe, is implemented using userver as its initial code base. Each web server is also tuned to determine its best configuration for a specific workload, which is shown to be critical to achieving the best server performance. Finally, the server experiments are verified to ensure each server is performing within reasonable standards.
The performance of the various architectures is examined on a uniprocessor system. Three workloads are examined: no disk-I/O, moderate disk-I/O and heavy disk-I/O. These three workloads highlight the differences among the architectures. As expected, the experiments show the amount of disk I/O is the most significant factor in determining throughput, and once there is memory pressure, the memory footprint of the server is the crucial performance factor. The peak throughput differs by only 9-13% among the best servers of each architecture across the various workloads. Furthermore, the appropriate configuration parameters for best performance vary with the workload, and no single server performs best for all workloads. The results show the event-driven and pipeline servers have equivalent throughput when there is moderate or no disk-I/O. The only difference appears in the heavy disk-I/O experiments, where WatPipe's smaller memory footprint for its blocking server gives it a performance advantage. The Knot server has 9% lower throughput for no disk-I/O and moderate disk-I/O and 13% lower for heavy disk-I/O, showing the extra overheads incurred by thread-per-connection servers, though its performance remains close to the other server architectures.
An unexpected result is that blocking sockets with sendfile outperforms non-blocking sockets with sendfile when there is heavy disk-I/O because of more efficient disk access.
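The distinction can be made concrete with a small sketch (Python for brevity; the servers above are C/C++, and this is an illustration of the mechanism, not the thesis code). With a blocking socket, one sendfile call can push the whole file, letting the kernel schedule large sequential disk reads; with a non-blocking socket, the server sends until EAGAIN and must return to the event loop, interleaving many partial transfers.

    import os

    def send_file_blocking(sock, path):
        # One logical operation: the kernel blocks the caller as needed
        # and can read the file from disk in large sequential chunks.
        with open(path, "rb") as f:
            size, offset = os.fstat(f.fileno()).st_size, 0
            while offset < size:               # sendfile may still send in pieces
                offset += os.sendfile(sock.fileno(), f.fileno(), offset, size - offset)

    def send_file_nonblocking(sock, path, offset, count):
        # One edge-triggered attempt: send until the socket buffer fills,
        # then return the new offset so the event loop can retry later.
        with open(path, "rb") as f:
            while count > 0:
                try:
                    sent = os.sendfile(sock.fileno(), f.fileno(), offset, count)
                except BlockingIOError:        # EAGAIN: back to epoll
                    break
                if sent == 0:                  # peer closed the connection
                    break
                offset += sent
                count -= sent
        return offset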
Next, the performance of the various architectures is examined on a multiprocessor system. Knot is excluded from the experiments as its underlying thread library, Capriccio, only supports uniprocessor execution. For these experiments, it is shown that partitioning the system so that server processes, subnets and requests are handled by the same CPU is necessary to achieve high throughput. Both N-copy and new hybrid versions of the uniprocessor servers, extended to support partitioning, are tested. While the N-copy servers performed the best, new hybrid versions of the servers also performed well.
These hybrid servers have throughput within 2% of the N-copy servers but offer benefits over N-copy such as a smaller memory footprint and a shared address-space.
For multiprocessor systems, it is shown that once the system becomes disk bound, the throughput of the servers is drastically reduced. To maximize performance on a multiprocessor, high disk throughput and lots of memory are essential.
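The partitioning described above can be sketched as follows (illustrative Python in place of the real servers): fork one copy of the server per CPU and pin it, so every request a copy accepts is handled on its own CPU. Routing each subnet's traffic to the matching copy happens outside the server, at the clients or a front end, and is not shown.

    import os, socket

    def run_copy(cpu, port):
        os.sched_setaffinity(0, {cpu})         # pin this process to one CPU (Linux)
        srv = socket.create_server(("", port)) # one listen socket per copy
        while True:
            conn, _ = srv.accept()             # handled entirely on this CPU
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
            conn.close()

    for cpu in range(os.cpu_count()):
        if os.fork() == 0:                     # child becomes one pinned N-copy server
            run_copy(cpu, 8080 + cpu)
    for _ in range(os.cpu_count()):
        os.wait()                              # parent reaps the children

The hybrid variants keep this per-CPU partitioning but run the copies inside one address space, which is what yields the smaller memory footprint at nearly N-copy throughput.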
|
53 |
PDF Eagle : A PDF viewer in Qt
Gustavsson, Lukas January 2012
To keep up in the rapidly changing market for smart mobile phones, new ways of consuming information are needed. In this master thesis project, a Portable Document Format (PDF) viewer with more features than existing PDF viewers for Symbian^3 was developed, called PDF Eagle. PDF Eagle was implemented using the Qt framework, allowing it to be easily ported to different platforms. PDF documents have a rich structure, and being fully compatible with the standard while remaining responsive enough to run on a mobile platform is a formidable technical challenge. This report describes the issues that had to be resolved on the way to a functioning "app" that was marketed on the Nokia market in October 2011 with great success. Among the technical challenges was a way to correctly render coloured objects in PDFs; a gradient is a way to colour an area in a PDF file. Tests showed that PDF Eagle handles gradients, shadows and encrypted PDF files better than other mobile PDF viewers. The conclusion of this report is that PDF Eagle is on par with or outmatches other PDF viewers on the targeted platform. This work also shows the feasibility of incrementally downloading the pages of a PDF file, which provides a better user experience through faster viewing.
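A natural mechanism for such incremental download, sketched here as an assumption rather than PDF Eagle's actual implementation, is HTTP byte-range requests over a linearized PDF, which places the objects for the first page at the front of the file. The URL and range sizes below are placeholders.

    import urllib.request

    def fetch_range(url, start, end):
        req = urllib.request.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
        with urllib.request.urlopen(req) as resp:
            return resp.read()                 # 206 Partial Content body

    # Fetch the head of the file: for a linearized PDF this contains the
    # linearization dictionary and the objects needed to render page 1,
    # so the viewer can display it while later pages are still downloading.
    head = fetch_range("https://example.com/doc.pdf", 0, 65535)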
|
55 |
Design and Analysis of a Highly Efficient File Server Group
Liu, Feng-jung 29 January 2005
The IT community has increasingly come to view storage as a resource that should be shared among computer systems and managed independently of the computer systems it serves. Moreover, the explosive growth of Web content has drawn increasing attention to two major challenges: the scalability and high availability of network file systems. Ways to improve system reliability and availability, to achieve the expected reduction in operational expenses, and to reduce the work of system management have therefore become essential issues. A basic technique for improving the reliability of a file system is to mask the effects of failures through replication. Consistency control protocols are implemented to ensure consistency among these replicas.
In this dissertation, we leverage the concept of an intermediate file handle to mask the heterogeneity of file systems. However, such a monolithic server system suffers from poor utilization due to the lack of dependence checking between writes and of management for out-of-order requests. Hence, following the intermediate-file-handle approach, we propose an efficient data consistency control scheme that eliminates unnecessary waits for independent NFS writes, improving the efficiency of the file server group. In addition, we propose a simple load-sharing mechanism for the NFS client to improve system throughput and the utilization of duplicates. Finally, experimental results prove the efficiency of the proposed consistency control mechanism and load-sharing policy. Above all, ease of implementation is our main design consideration.
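The dependence check behind "unnecessary waits" can be sketched as follows (illustrative Python, not the dissertation's implementation): two writes are dependent only if they target the same intermediate file handle and their byte ranges overlap; everything else can be dispatched to the replicas immediately.

    from dataclasses import dataclass, field

    @dataclass
    class Write:
        handle: int                    # intermediate file handle naming the file
        offset: int
        length: int
        wait_for: list = field(default_factory=list)

    def dependent(a, b):
        if a.handle != b.handle:       # different files: always independent
            return False
        return (a.offset < b.offset + b.length and
                b.offset < a.offset + a.length)    # byte ranges overlap

    def schedule(pending, w):
        # Dispatch w at once unless it truly depends on an in-flight write.
        w.wait_for = [p for p in pending if dependent(w, p)]
        pending.append(w)
        return not w.wait_for          # True: no wait needed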
|
56 |
Design and Implementation of a Load Balancing Web Server Cluster
Tseng, Jin-Shan 02 September 2005
The Internet has become popular, and many traditional services have gradually moved to the Web. A single web server can no longer satisfy the large number of user requests, and the cluster-based web server architecture has become a suitable alternative. The dispatch mechanism plays an important role in a web server cluster, and many load-balancing policies have been proposed recently. However, this research has relied only on simulation, so the performance of these policies in a real system is unknown. The simulations all assume that web traffic follows a heavy-tailed distribution. In our experience, however, this assumption has changed: web content has grown large as network bandwidth has increased, and more and more large files, such as video, audio and trial software, are appearing. We call this web traffic a data-intensive workload.
In this study, we use a real, data-intensive web site to measure and compare these scheduling policies.
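As one concrete example of the policies such a measurement compares, here is a least-connections dispatcher in sketch form (illustrative Python; the thesis's testbed is not reproduced). It suits a data-intensive workload because long transfers keep connections open, so the per-node counter tracks actual load better than round-robin does.

    class LeastConnections:
        def __init__(self, backends):
            self.active = dict.fromkeys(backends, 0)   # open connections per node

        def pick(self):
            node = min(self.active, key=self.active.get)
            self.active[node] += 1                     # count the new transfer
            return node

        def done(self, node):
            self.active[node] -= 1                     # transfer finished

    lb = LeastConnections(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    node = lb.pick()    # route the request to node; call lb.done(node) on close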
|
57 |
Design and Implement an Efficient Web Application Server
Wu, Jr-Houng 01 August 2000
Web application servers are rapidly becoming essential resources for competitive advantage, because e-businesses can derive substantial revenue from them. These benefits come from consumers, and more consumers bring more benefits, so a web application server that provides more efficient service will attract more people. A web application server handles frequent transactions: it accesses data in a database and produces new web pages to return to consumers through CGI (Common Gateway Interface) programs. In this paper, we present a new method that improves the performance of a web application server by saving network bandwidth, reducing web server load and cutting the end user's waiting time. The method divides the source web page into dedicated data and common information. By avoiding repeated downloads of the common information, it effectively improves the use of network bandwidth and the user's waiting time. The method can also be easily combined with other existing technologies that address Internet efficiency problems. We developed an approach to design and implement an efficient web application server accordingly. Finally, we measure the traditional method against the new one to show that our method really does save network bandwidth, reduce web server load and cut the end user's waiting time. We hope the method can alleviate common efficiency problems on the Internet.
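A hedged sketch of the page-splitting idea (all names are illustrative, not the paper's code): the common information of a page is served once and cached by the client, while each subsequent request returns only the small dedicated data produced by the CGI program.

    from string import Template

    COMMON = Template("""<html><body>
    <div id="nav">... shared layout and navigation, downloaded once ...</div>
    <div id="content">$dedicated</div>
    </body></html>""")

    def first_visit(dedicated_html):
        return COMMON.substitute(dedicated=dedicated_html)  # template + data

    def later_visit(dedicated_html):
        return dedicated_html      # data only; client fills its cached template

    # If the common part is 30 KB and the dedicated data 2 KB, every visit
    # after the first moves 2 KB instead of 32 KB over the network.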
|
58 |
Distributed Power Control and Monitoring System with Internet IntegrationWang, Long-Cheng 28 June 2002 (has links)
With the rapid development of the Internet and computer technology, users can already build many applications with remote control and monitoring (CM) capability, including CM of hardware power devices. With state-of-the-art Internet technology, a server-side computer can integrate all client-side CM devices, with distributed processing and unified management, to realize a distributed power CM system with Internet integration, which is the goal of this thesis.
The Internet-based distributed power control and monitoring system proposed in this thesis uses an 8051 micro-controller to build low-cost, highly stable and user-friendly CM devices for home and factory use. The LabVIEW language was used to develop the man-machine interface (MMI), and the DataSocket tool was used to share information over the network. The MMIs designed for the CM devices are a multi-channel digital input/output acquisition system, a voltage/current acquisition system and a temperature acquisition system.
This thesis also applies newly developed Internet technologies and standards, including ActiveX controls developed with programming-language tools so that they can be embedded in web browsers. Using ASP (Active Server Pages) and Dynamic HTML, this research also built a web database system on the Internet.
|
59 |
System Support for Scalable, Reliable and Highly Manageable Internet Services
Luo, Mon-Yen 13 September 2002
The Internet is increasingly being used as basic infrastructure for a variety of services. A high-performance server system is the key to the success of all these Internet services. However, the explosive growth of the Internet has resulted in heavy demands being placed on Internet servers and has raised great concerns about the performance, scalability and availability of the associated services. A monolithic server hosting a service is usually not sufficient to handle these challenges. Distributed server architecture, consisting of multiple heterogeneous computers that appear as a single high-performance system, has proven a successful and cost-effective alternative to meet these challenges. Consequently, more and more Internet service providers run their services on a cluster of servers, and this trend is accelerating.
The distributed server architecture alone, however, is an insufficient answer to the challenges faced by Internet service providers today. This thesis presents an integrated system for supporting scalable and highly reliable Internet services on the distributed server architecture. The system consists of two major parts: a server load balancer and a distributed server management system. The server load balancer intelligently routes incoming requests to the appropriate server node, while the Java-based management system relieves the administrator's burden of managing such a distributed server system. With these mechanisms, we provide an integrated system that consolidates a group of heterogeneous computers into a powerful, adaptive and reliable Internet server system.
|
60 |
Scalable analysis and design of service systems
Zhang, Bo 29 March 2011
In this dissertation, we develop analytical and computational tools for performance analysis and design of large-scale service systems. The dissertation consists of three main chapters.
The first chapter is devoted to devising efficient task assignment policies for large-scale service system models from a rare event analysis standpoint. Specifically, we study the steady-state behavior of multi-server queues with general job size distributions under size-interval task assignment (SITA) policies. Assuming Poisson arrivals and the existence of the alpha-th moment of the job size distribution for some alpha > 1, we show that if the job arrival rate and the number of servers increase to infinity with the traffic intensity held fixed, a SITA policy parameterized by alpha minimizes in a large deviation sense the steady-state probability that the total number of jobs in the system is greater than or equal to the number of servers. The optimal large deviation decay rate can be arbitrarily close to the one for the corresponding probability in an infinite-server queue, which only depends on the system traffic intensity but not on any higher moments of the job size distribution. This supports in a many-server asymptotic framework the common wisdom that separating large jobs from small jobs protects system performance against job size variability.
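To make the policy concrete, here is a small sketch of size-interval assignment (illustrative Python): server i serves only jobs whose size falls in its interval. The cutoffs below equalize expected load across intervals, the classic SITA-E choice; the alpha-parameterized cutoffs analyzed in this chapter are chosen differently and are not reproduced here.

    def sita_cutoffs(sizes, k):
        """Given sampled job sizes, return k-1 cutoffs splitting total work evenly."""
        sizes = sorted(sizes)
        total = sum(sizes)
        cutoffs, acc, target = [], 0.0, total / k
        for s in sizes:
            acc += s
            if acc >= target * (len(cutoffs) + 1) and len(cutoffs) < k - 1:
                cutoffs.append(s)
        return cutoffs

    def assign(size, cutoffs):
        """Index of the server whose size interval contains this job."""
        return sum(size > c for c in cutoffs)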
In the second chapter, we study constraint satisfaction problems for a Markovian parallel-server queueing model with impatient customers, motivated by large telephone call centers. To minimize the staffing level subject to different service-level constraints, we propose refined square-root staffing (SRS) rules, which preserve the insightfulness and computational scalability of the celebrated SRS principle and yet achieve a stronger form of optimality. In particular, using asymptotic series expansion techniques, we first develop refinements to a set of asymptotic performance approximations recently used in analyzing large call centers, namely, the Quality and Efficiency Driven (QED) diffusion approximations. We then use the improved performance approximations to explicitly characterize the error of conventional SRS and further obtain the refined SRS rules. Finally, we demonstrate how the explicit form of the staffing refinements enables an analytical assessment of the accuracy of conventional SRS and its underlying QED approximation.
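The conventional rule being refined can be stated in a few lines: with offered load R = lambda/mu, SRS staffs N = R + beta*sqrt(R) agents, with beta chosen from an asymptotic approximation of the service-level constraint. The sketch below uses the Halfin-Whitt delay approximation for the plain M/M/N queue for simplicity; the chapter's model has impatient customers and uses the corresponding QED approximations plus refinement terms, which are not reproduced here.

    import math

    def normal_pdf(x):
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    def normal_cdf(x):
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    def hw_delay_prob(beta):
        """Halfin-Whitt approximation of P(wait > 0) in the M/M/N queue."""
        return 1 / (1 + beta * normal_cdf(beta) / normal_pdf(beta))

    def srs_staffing(arrival_rate, service_rate, target_delay_prob):
        R = arrival_rate / service_rate        # offered load, in servers
        lo, hi = 0.0, 10.0                     # bisect for the smallest adequate beta
        for _ in range(60):
            mid = (lo + hi) / 2
            if hw_delay_prob(mid) > target_delay_prob:
                lo = mid
            else:
                hi = mid
        return math.ceil(R + hi * math.sqrt(R))

    # e.g. 100 calls/min, 1 min mean handle time, at most 20% of calls delayed:
    # srs_staffing(100, 1, 0.2) gives about 111 agents (beta around 1.06).

The appeal of the rule, which the refinements preserve, is that the whole computation scales with sqrt(R) arithmetic rather than with exact birth-death calculations over hundreds of states.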
In the third chapter, we study a fluid model for many-server Markovian queues in changing environments, which can be used to model large-scale service systems with customer abandonments and time-varying arrivals. We obtain the stationary distribution of the fluid model, which refines and is shown to converge, as the environment changing rate vanishes in a proper way, to a simple discrete bimodal approximation. We also prove that the fluid model arises as a law of large number limit in a many-server asymptotic regime.
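A hedged sketch of the kind of fluid model studied here, with fixed parameters for illustration (the chapter's model also has a randomly changing environment, which this sketch omits): with s servers, service rate mu, abandonment rate theta and time-varying arrival rate lam(t), the fluid content q(t) evolves as q'(t) = lam(t) - mu*min(q, s) - theta*max(q - s, 0), i.e. busy servers drain fluid and fluid waiting beyond s abandons.

    import math

    def simulate(lam, s, mu, theta, q0=0.0, horizon=24.0, dt=0.001):
        q, t, path = q0, 0.0, []
        while t < horizon:
            drain = mu * min(q, s) + theta * max(q - s, 0.0)
            q = max(q + dt * (lam(t) - drain), 0.0)   # Euler step, kept nonnegative
            t += dt
            path.append((t, q))
        return path

    # Sinusoidal daily arrivals around a mean of 90 jobs per unit time:
    path = simulate(lambda t: 90 + 30 * math.sin(2 * math.pi * t / 24),
                    s=100, mu=1.0, theta=0.5)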
|