31 |
"Avaliação de desempenho com algoritmos de escalonamento em clusters de servidores Web" / Performance Evaluation of scheduling algorithms in Web clustersCaio Peres Sabo 13 June 2006 (has links)
The emergence of new Web-based services and applications has driven rapid growth in the number of World Wide Web users, and the Web has become increasingly popular in the business world. E-commerce sites, which handle heavy request traffic, have adopted distributed Web server systems such as the Web cluster architecture. These sites frequently face overload situations, during which they may fail to serve transaction requests (highly likely to generate revenue) because of increased demand from navigation requests (which generate revenue only indirectly); inefficient use of resources can thus compromise system performance, and it is in this context that this work is situated. This thesis develops a Web Server model for E-Commerce (SWE-C), validated by means of a simulation model and a synthetic workload generated from a model of the main request types that characterize an e-commerce site. Simulations were run with several combinations of scheduling algorithms and queue service disciplines, among which stands out a new discipline, proposed in this work, that uses a CPU-consumption-oriented priority mechanism. The aim is to increase the throughput of transaction requests and improve response times under overload. A performance evaluation was carried out and showed that the proposed priority mechanism suits the needs of an e-commerce site.
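The CPU-consumption-oriented priority discipline lends itself to a compact illustration. The following is a minimal sketch, not code from the thesis: it assumes two hypothetical request classes, transaction and navigation, serves transactions first, and breaks ties within a class in favor of requests that have consumed less CPU so far.

```python
import heapq
import itertools

# Hypothetical request classes; the thesis distinguishes revenue-generating
# "transaction" requests from "navigation" requests.
PRIORITY = {"transaction": 0, "navigation": 1}  # lower value = served first

class PriorityRequestQueue:
    """Serve transaction requests ahead of navigation requests; within a
    class, favor requests that have consumed less CPU so far (a rough
    analogue of a CPU-consumption-oriented discipline)."""

    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # FIFO tie-breaker for equal keys

    def push(self, kind, cpu_used, payload):
        heapq.heappush(self._heap,
                       (PRIORITY[kind], cpu_used, next(self._tie), payload))

    def pop(self):
        _, _, _, payload = heapq.heappop(self._heap)
        return payload

q = PriorityRequestQueue()
q.push("navigation", cpu_used=0.01, payload="GET /catalog")
q.push("transaction", cpu_used=0.05, payload="POST /checkout")
print(q.pop())  # -> "POST /checkout": the transaction is served first
```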
|
32 |
Performance Analysis of Offloading Application-Layer Tasks to Network Processors. Mahadevan, Soumya, 01 January 2007 (has links) (PDF)
Offloading tasks to a network processor is one important way to increase server performance. Hardware offloading of Transmission Control Protocol/Internet Protocol (TCP/IP)-intensive tasks is known to improve performance significantly. Offloading an entire application can have an even greater impact, since it removes that load from the server altogether. The goal of this thesis is to consider such a system with application-level offloading, rather than hardware offloading, and gauge its performance benefits.
I implement this project on an Apache httpd server (running Red Hat Linux), on a system that utilizes a co-located network processor (IXP2855). The performance of the two implementations is measured using the SPECweb2005 benchmark, the accepted industry standard for evaluating Web server performance.
|
33 |
A Priority-Based Admission Control Scheme for Commercial Web Servers. Nafea, Ibtehal T., Younas, M., Holton, Robert, Awan, Irfan U. January 2014 (has links)
This paper investigates the performance and load management of web servers deployed in commercial websites. Such websites offer services such as flight/hotel booking, online banking, stock trading, and product purchases, among others. Customers increasingly rely on these round-the-clock services, which are easier and (generally) cheaper to use. However, the growing number of customers' requests places greater demand on the web servers, leading to server overload and the consequent provision of an inadequate level of service. This paper addresses these issues and proposes an admission control scheme based on a class-based priority scheme that classifies customers' requests into different classes. The proposed scheme is formally specified using the π-calculus and is implemented as a Java-based prototype system. The prototype is used to simulate the behaviour of commercial website servers and to evaluate their performance in terms of response time, throughput, arrival rate, and the percentage of dropped requests. Experimental results demonstrate that the proposed scheme significantly improves the performance of high-priority requests without causing adverse effects on low-priority requests.
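As a rough illustration of class-based admission control (a sketch under assumed class names and thresholds, not the authors' scheme), requests can be classified and the lowest-priority classes refused first as load grows:

```python
# Hypothetical class-based admission controller: under load, low-priority
# classes are refused first so high-priority requests keep acceptable
# response times (the general idea behind a class-based scheme).
def admit(request_class, queue_length, capacity=100):
    load = queue_length / capacity
    # Admit everything when lightly loaded; shed the lowest classes first
    # as the queue fills up. Thresholds are illustrative only.
    thresholds = {"bronze": 0.6, "silver": 0.8, "gold": 0.95}
    return load < thresholds[request_class]

for cls in ("gold", "silver", "bronze"):
    print(cls, admit(cls, queue_length=70))  # only bronze is refused at 70% load
```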
|
34 |
A memory-based load balancing technique for distributed web servers. Bennur, Harsha, 01 January 1998 (has links)
No description available.
|
35 |
TCP Connection Management Mechanisms for Improving Internet Server Performance. Shukla, Amol. January 2005 (has links)
This thesis investigates TCP connection management mechanisms in order to understand the behaviour and improve the performance of Internet servers during overload conditions such as flash crowds. We study several alternatives for implementing TCP connection establishment, reviewing approaches taken by existing TCP stacks as well as proposing new mechanisms to improve server throughput and reduce client response times under overload. We implement some of these connection establishment mechanisms in the Linux TCP stack and evaluate their performance in a variety of environments. We also evaluate the cost of supporting half-closed connections at the server and assess the impact of an abortive release of connections by clients on the throughput of an overloaded server. Our evaluation demonstrates that connection establishment mechanisms that eliminate the TCP-level retransmission of connection attempts by clients increase server throughput by up to 40% and reduce client response times by two orders of magnitude. Connection termination mechanisms that preclude support for half-closed connections additionally improve server throughput by up to 18%.
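One of the ideas evaluated here, refusing excess connections explicitly instead of silently dropping them so that clients do not retransmit and wait out long TCP timeouts, can be sketched at the socket level. This is an illustrative userspace sketch, not the thesis's Linux TCP stack implementation; the capacity threshold is hypothetical:

```python
import socket
import struct

MAX_ACTIVE = 64  # hypothetical server capacity threshold
active = 0

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(128)

while True:
    conn, addr = srv.accept()
    if active >= MAX_ACTIVE:
        # SO_LINGER with a zero timeout makes close() send a RST, giving
        # the client an immediate, explicit refusal instead of leaving it
        # to retransmit connection attempts against a saturated server.
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack("ii", 1, 0))
        conn.close()
        continue
    active += 1
    # ... hand `conn` to a worker; the worker decrements `active` when done.
```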
|
36 |
Políticas de atendimento para servidores Web com serviços diferenciados baseadas nas características das requisições / Service policies for Web servers with differentiated services based on request characteristics. Traldi, Ottone Alexandre, 12 December 2008 (has links)
This work proposes service differentiation mechanisms for Web servers, aiming to improve system performance when the characteristics of Web requests are taken into account in the service policies. The electronic commerce (e-commerce) context was adopted for this research, since it is one of the environments most negatively affected when a server behaves inadequately under overload. The features of typical e-commerce Web requests were investigated so that they could serve as guidelines for the mechanisms and improve server performance; a request classifier along these lines is sketched after this abstract. A workload model and a simulation model were then proposed for the experiments, making it possible to evaluate the results of inserting the various mechanisms into the Web Server with Differentiated Services (SWDS), a server model whose architecture enables it to provide differentiated services to its users and applications. New request scheduling mechanisms were proposed, as well as new admission control mechanisms. Several simulations were carried out, and the results show that exploiting the characteristics of Web requests, besides being fundamental to a good understanding of server behavior, makes it possible to improve system performance.
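A first step in any such policy is classifying requests by their characteristics. The sketch below is illustrative only; the URL patterns and class names are hypothetical, not taken from the thesis:

```python
import re

# Hypothetical classifier mapping e-commerce request URLs to request
# types that could drive scheduling and admission decisions.
RULES = [
    (re.compile(r"^/(checkout|pay|cart/confirm)"), "transaction"),
    (re.compile(r"^/(search|find)"), "search"),
    (re.compile(r"^/(product|catalog|home)"), "navigation"),
]

def classify(path):
    for pattern, kind in RULES:
        if pattern.match(path):
            return kind
    return "navigation"  # default class for unmatched paths

print(classify("/checkout/step1"))  # -> transaction
print(classify("/product/42"))      # -> navigation
```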
|
37 |
A Study of Replicated and Distributed Web Content. John, Nitin Abraham, 10 August 2002 (has links)
"
With the increase in traffic on the web, popular web sites receive a large number of requests. Servers at these sites are sometimes unable to handle them all, and clients of such sites experience long delays. One approach to overcoming this problem is distributing or replicating content over multiple servers, which allows client requests to be spread across them.
Several techniques have been suggested to direct client requests to multiple servers, and we discuss them here. With this work we hope to determine the extent and method of content replication and distribution at web sites. To understand the distribution and replication of content, we ran client programs to retrieve the headers and bodies of web pages and observed the changes in them over multiple requests. We also hope to understand possible problems that clients of such sites could face due to caching and the standardization of newer protocols like HTTP/1.1. The main contribution of this work is to understand the actual implementation of replicated and distributed content on multiple servers and its implications for clients.
Our investigations revealed issues with replicated and distributed content and its effects on caching due to inconsistent identifiers being sent by different servers serving the same content. We were able to identify web sites performing application-layer switching mechanisms such as DNS and HTTP redirection. Lower layers of switching required investigation of the HTTP responses from servers, which was hampered by insufficient tags sent by the servers. We find that web sites employ a large amount of distribution of embedded content, and its ramifications for HTTP/1.1 need further investigation.
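The probing method described above can be approximated in a few lines. This sketch (with a placeholder URL, not one from the study) repeats HEAD requests for the same resource and compares response validators; differing ETag, Last-Modified, or Server values for identical content hint at replicated servers behind one name:

```python
import urllib.request

def probe(url, attempts=5):
    """Fetch the same URL several times and collect the distinct
    (ETag, Last-Modified, Server) tuples observed in the responses."""
    seen = set()
    for _ in range(attempts):
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            seen.add((resp.headers.get("ETag"),
                      resp.headers.get("Last-Modified"),
                      resp.headers.get("Server")))
    return seen

# Example (hypothetical URL): more than one distinct tuple suggests that
# successive requests were served by different replicas.
# print(probe("http://example.com/"))
```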
|
38 |
Adaptive Multimedia Content Delivery for Scalable Web Servers. Pradhan, Rahul, 02 May 2001 (links)
The phenomenal growth in the use of the World Wide Web often places a heavy load on networks and servers, threatening to increase Web server response time and raising scalability issues for both the network and the server. With advances in optical networking and the increasing use of broadband technologies like cable modems and DSL, the server, and not the network, is more likely to be the bottleneck. Many clients are willing to receive a degraded, less resource-intensive version of the requested content as an alternative to connection failures. In this thesis, we present an adaptive content delivery system that transparently switches content depending on the load on the server in order to serve more clients. Our system is designed to work for dynamic Web pages and streaming multimedia traffic, which are not supported by other adaptive content approaches. We designed a system capable of quantifying the load on the server and then performing the necessary adaptation, and a streaming MPEG server and client that can react to the server load by scaling the quality of the frames transmitted. The main benefits of our approach are transparent content switching for content adaptation, alleviating server load through a graceful degradation of server performance, and no required modifications to existing server software, browsers, or the HTTP protocol. We experimentally evaluate our adaptive server system and compare it with a non-adaptive server. We find that adaptive content delivery can support as much as 25% more static requests, 15% more dynamic requests, and twice as many multimedia requests as a non-adaptive server. Our client-side experiments performed on the Internet show that the response time savings from our system are quite significant.
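The quantify-load-then-adapt loop can be sketched simply. The following is illustrative, not the thesis's system: it uses the Unix load average as a crude load signal and picks a degraded content variant under load (the thresholds and file suffixes are assumptions):

```python
import os

# quality level -> filename suffix (hypothetical naming convention)
VARIANTS = {"full": "", "medium": ".med", "low": ".low"}

def server_load():
    # 1-minute load average normalized by CPU count as a crude load
    # signal (os.getloadavg is Unix-only).
    return os.getloadavg()[0] / (os.cpu_count() or 1)

def select_variant(path):
    """Transparently map a request path to a content variant based on load."""
    load = server_load()
    level = "full" if load < 0.7 else ("medium" if load < 1.0 else "low")
    return path + VARIANTS[level]

print(select_variant("/video/clip.mpg"))  # e.g. "/video/clip.mpg.low" under load
```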
|
39 |
Understanding Flaws in the Deployment and Implementation of Web Encryption. Sivakorn, Suphannee. January 2018 (has links)
In recent years, the web has switched from the unencrypted HTTP protocol to encrypted communications, primarily through the increasing deployment of TLS to mitigate information leakage over the network. This development has led many web service operators to mistakenly think that migrating from HTTP to HTTPS will magically protect them from information leakage without any additional effort on their end to guarantee the desired security properties. In reality, despite the fact that enough infrastructure exists and the protocols have been “tested” (by virtue of being in wide, but not ubiquitous, use for many years), deploying HTTPS is a highly challenging task due to the technical complexity of its underlying protocols (i.e., HTTP, TLS) as well as the complexity of the TLS certificate ecosystem and that of popular client applications such as web browsers. For example, we found that many websites still avoid ubiquitous encryption, forcing only critical functionality and sensitive data access over encrypted connections while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. Thus, it is crucial for developers to verify the correctness of their deployments and implementations.
In this dissertation, in an effort to improve users’ privacy, we highlight semantic flaws in the implementations of both web servers and clients, caused by the improper deployment of web encryption protocols. First, we conduct an in-depth assessment of major websites and explore what functionality and information is exposed to attackers that have hijacked a user’s HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS: service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-secure cookies. Our cookie hijacking study reveals a number of severe flaws; for example, attackers can obtain the user’s saved address and visited websites from Google, while Bing and Yahoo allow attackers to extract the contact list and send emails from the user’s account. To estimate the extent of the threat, we run measurements on a university public wireless network for a period of 30 days and detect over 282K accounts exposing the cookies required for our hijacking attacks.
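One standard server-side mitigation for the cookie exposure described above is to ensure that any cookie gating sensitive functionality carries the Secure flag, so it is never transmitted over plain HTTP. A minimal sketch using Python's standard library (the cookie name and value are placeholders):

```python
from http import cookies

c = cookies.SimpleCookie()
c["session"] = "opaque-token"
c["session"]["secure"] = True    # never transmitted over plain HTTP
c["session"]["httponly"] = True  # not readable from JavaScript
print(c.output())
# e.g. Set-Cookie: session=opaque-token; Secure; HttpOnly
```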
Next, we explore and study security mechanisms proposed to eliminate this problem by enforcing encryption, such as HSTS and HTTPS Everywhere. We evaluate each mechanism in terms of its adoption and effectiveness. We find that all of these mechanisms suffer from implementation flaws or deployment issues, and argue that, as long as servers do not support ubiquitous encryption across their entire domain, no mechanism can effectively protect users from cookie hijacking and information leakage.
Finally, as the security guarantees of TLS (and in turn HTTPS) depend critically on the correct validation of X.509 server certificates, we study hostname verification, a critical component of the certificate validation process. We develop HVLearn, a novel testing framework for verifying the correctness of hostname verification implementations, and use it to analyze a number of popular TLS libraries and applications. We found 8 unique violations of the RFC specifications; several of these violations are critical and can render the affected implementations vulnerable to man-in-the-middle attacks.
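Hostname verification of the kind HVLearn checks is what a well-configured TLS client obtains from its library. A minimal sketch using Python's ssl module, in which the library matches the requested server name against the certificate's subject alternative names during the handshake:

```python
import socket
import ssl

# Default context enables certificate validation and hostname checking.
ctx = ssl.create_default_context()
ctx.check_hostname = True          # the default, shown for emphasis
ctx.verify_mode = ssl.CERT_REQUIRED

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # Reaching this point means the certificate chain validated and
        # the hostname matched; a mismatch raises ssl.SSLCertVerificationError.
        print(tls.version())
```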
|
40 |
A pattern-driven process for secure service-oriented applications. Unknown Date (has links)
During the last few years, Service-Oriented Architecture (SOA) has been considered the new phase in the evolution of distributed enterprise applications. Even though the concept is commonly accepted, a real problem hinders the widespread use of SOA: a methodology to design and build secure service-oriented applications is needed. In this dissertation, we design a novel process to secure service-oriented applications. Our contribution is original not only because it applies the MDA approach to the design of service-oriented applications but also because it allows securing them by dynamically applying security patterns throughout the whole process. Security patterns capture security knowledge and describe security mechanisms. In our process, we present a structured map of security patterns for SOA and web services and its corresponding catalog. At the different steps of a software lifecycle, the architect or designer needs to make some security decisions. / An approach using a decision tree made of security pattern nodes is proposed to help make these choices. We show how to extract a decision tree from our map of security patterns. Model-Driven Architecture (MDA) is an approach that promotes the systematic use of models during a system's development lifecycle. In the dissertation we describe a chain of transformations necessary to obtain secure models of the service-oriented application. A main benefit of this process is that it decouples the application domain expertise from the security expertise, both of which are needed to build a secure application. Security knowledge is captured by pre-defined security patterns, their selection is made easier by the decision trees, and their application can be automated. As a consequence, including security during the software development process becomes more convenient for architects and designers. / A second benefit is that the insertion of security is semi-automated and traceable, so the process is flexible and can easily adapt to changing requirements. Given that SOA was developed to provide enterprises with modular, reusable and adaptable architectures, but that security was the principal factor hindering its use, we believe that our process can act as an enabler for service-oriented applications. / by Nelly A. Delessy. / Thesis (Ph.D.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, FL: 2008. Mode of access: World Wide Web.
|