1 |
Design and Implementation of a Load Balancing Web Server Cluster. Tseng, Jin-Shan, 02 September 2005 (has links)
The Internet has become popular, and many traditional services have moved to the web stage by stage. A single-server architecture can no longer satisfy a large number of user requests, and the cluster-based web server architecture has become a suitable alternative. The dispatch mechanism plays an important role in a web server cluster, and many load balancing policies have been proposed recently. However, this prior work has been evaluated only by simulation, so the performance of these policies in a real system is unknown. These simulations all assume that web traffic follows a heavy-tailed distribution. In our experience, however, this assumption has changed: web content has grown larger as network bandwidth increases and more large files, such as video, audio, and downloadable software, appear. We define this kind of web traffic as a data-intensive workload.
In this study, we use a real and data-intensive web site to measure and compare these scheduling policies.
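The abstract does not name the individual scheduling policies measured; purely as a minimal sketch of how dispatch policies can be compared under a data-intensive workload, the following Python example contrasts round-robin with a least-loaded policy on a synthetic mix of small pages and occasional very large transfers. The server count, request sizes, and policies shown are illustrative assumptions, not the thesis's actual testbed.

```python
# Minimal, hypothetical comparison of two dispatch policies for a web server
# cluster serving a data-intensive workload (many small pages, a few huge files).
import itertools
import random

random.seed(42)

NUM_SERVERS = 4
# Sizes in KB: roughly 1% of requests are very large downloads.
REQUESTS = [200_000 if random.random() < 0.01 else 20 for _ in range(1_000)]

def round_robin(requests, n):
    """Assign requests to servers in strict rotation, ignoring request size."""
    load = [0] * n
    rr = itertools.cycle(range(n))
    for size in requests:
        load[next(rr)] += size
    return load

def least_loaded(requests, n):
    """Assign each request to the server that currently holds the fewest bytes."""
    load = [0] * n
    for size in requests:
        target = min(range(n), key=lambda i: load[i])
        load[target] += size
    return load

for name, policy in [("round-robin", round_robin), ("least-loaded", least_loaded)]:
    load = policy(REQUESTS, NUM_SERVERS)
    imbalance = max(load) / (sum(load) / NUM_SERVERS)  # 1.0 means perfectly even
    print(f"{name:12s} max/avg byte ratio: {imbalance:.3f}")
```

A real measurement study would replace the synthetic request list with traces from the data-intensive site and report response times rather than byte imbalance, but the comparison structure is the same.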
|
2 |
System Support for Scalable, Reliable and Highly Manageable Internet Services. Luo, Mon-Yen, 13 September 2002 (has links)
The Internet is increasingly being used as basic infrastructure for a variety of services. A high-performance server system is the key to the success of all these Internet services. However, the explosive growth of the Internet has placed heavy demands on Internet servers and has raised great concerns about the performance, scalability, and availability of the associated services. A monolithic server hosting a service is usually not sufficient to handle these challenges. A distributed server architecture, consisting of multiple heterogeneous computers that appear as a single high-performance system, has proven to be a successful and cost-effective alternative for meeting these challenges. Consequently, more and more Internet service providers run their services on a cluster of servers, and this trend is accelerating.
The distributed server architecture alone, however, is an insufficient answer to the challenges faced by Internet service providers today. This thesis presents an integrated system for supporting scalable and highly reliable Internet services on the distributed server architecture. The system consists of two major parts: a Server Load Balancer and a Distributed Server Management System. The server load balancer intelligently routes incoming requests to the appropriate server node. The Java-based management system relieves the administrator's burden of managing such a distributed server system. With these mechanisms, we provide an integrated system that consolidates a group of heterogeneous computers into a powerful, adaptive, and reliable Internet server system.
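As an illustration of the routing role described above (not the thesis's actual Java-based implementation), the following sketch shows a dispatcher that skips unhealthy nodes and picks the healthy node with the lowest load per unit of capacity. The node names, weights, and selection rule are hypothetical.

```python
# Hypothetical front-end dispatcher: tracks per-node health and active requests,
# and routes each incoming request to the least-loaded healthy node by weight.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    weight: int = 1        # relative capacity of a heterogeneous node
    active: int = 0        # requests currently assigned
    healthy: bool = True   # would be updated by a separate health-check loop

@dataclass
class Dispatcher:
    nodes: list = field(default_factory=list)

    def pick(self):
        candidates = [n for n in self.nodes if n.healthy]
        if not candidates:
            raise RuntimeError("no healthy back-end nodes")
        # Weighted least-connections: lowest load per unit of capacity wins.
        return min(candidates, key=lambda n: n.active / n.weight)

    def dispatch(self, request_id):
        node = self.pick()
        node.active += 1
        return node.name

d = Dispatcher([Node("web1", weight=2), Node("web2"), Node("web3")])
d.nodes[2].healthy = False                 # simulate a failed node detected by monitoring
print([d.dispatch(i) for i in range(6)])   # web1 receives roughly twice web2's share
```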
|
3 |
Virtualizace a optimalizace IT infrastruktury ve společnosti / Virtualization and optimization of IT infrastructure in the company. Lipták, Roman, January 2019 (has links)
This master's thesis deals with the use of virtualization and consolidation technologies to optimize the IT infrastructure of a selected company. The analysis covers the current state of the IT infrastructure and the requirements for a future upgrade. The theoretical part describes the technologies and procedures used in virtualization and consolidation. Subsequently, a proposal for optimizing and expanding the IT equipment is created, together with the management, implementation, and economic evaluation of the solution.
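As a hedged illustration of the kind of sizing calculation that underlies server consolidation (the thesis's actual figures are not given in the abstract), the sketch below estimates how many virtualization hosts would be needed to absorb a set of physical servers; all workload numbers and names are invented.

```python
# Back-of-the-envelope consolidation estimate: sum the resource demands of the
# existing physical servers and divide by the usable capacity of one host.
import math

physical_servers = [
    {"name": "db1",  "cpu_ghz": 4.0, "ram_gb": 32},
    {"name": "app1", "cpu_ghz": 2.5, "ram_gb": 16},
    {"name": "app2", "cpu_ghz": 2.0, "ram_gb": 16},
    {"name": "file", "cpu_ghz": 1.0, "ram_gb": 8},
]
host_capacity = {"cpu_ghz": 24.0, "ram_gb": 128}
headroom = 0.75   # keep 25% spare capacity for peaks and fail-over

cpu_needed = sum(s["cpu_ghz"] for s in physical_servers)
ram_needed = sum(s["ram_gb"] for s in physical_servers)
hosts = max(
    math.ceil(cpu_needed / (host_capacity["cpu_ghz"] * headroom)),
    math.ceil(ram_needed / (host_capacity["ram_gb"] * headroom)),
)
print(f"CPU demand {cpu_needed} GHz, RAM demand {ram_needed} GB -> {hosts} virtualization host(s)")
```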
|
4 |
Design and Implementation a Web-based Learning System on Server Cluster. Ho, Jiun-Huei, 22 July 2005 (has links)
This dissertation presents a scalable web-framework learning system, the Web-based Learning System (WebLS), addressing the distance learning scenario. With the rapid spread of Internet infrastructure, the World Wide Web has become the most commonly used information platform and an important medium for education, and it has expanded into the Web-based e-Learning model. Web-based e-Learning is not bound by time or space, which greatly enhances the effectiveness of online distance learning.
The WebLS aims at bringing together the most promising web technologies and standards in order to attain a scalable and highly available online learning environment. The scalable web framework includes a SCORM-based learning management system (LMS), a server cluster infrastructure, a learning content management service, an information and content repository (the LMS database), and an agent system supporting the solutions used to achieve scalability, availability, portability, reusability, and standardization.
The WebLS can store learning content and provide a Web access portal to it for teachers, volunteers, and institutions that lack the resources or expertise to offer curricula over the Internet.
First, we design and implement the web-based Learning Management System (LMS), which conforms to the SCORM 1.2 e-Learning specification established by ADL and satisfies the basic functional requirements of online web-based learning. In addition, on the research topic of learning behavior analysis, we propose a method for extracting better learning paths, the Experience Matrix System with Time Fragment Extraction (EMST), which analyses learners' study behavior in the Web-based learning environment. This information is then used to explore and analyse students' learning paths in order to find paths suitable for more learners.
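The abstract does not describe the EMST algorithm itself; purely as an illustrative sketch of the general idea of mining learning paths from study behavior, the following example aggregates learners' visit sequences into a transition matrix and greedily reads off a frequently taken path. The content-unit names and the greedy extraction rule are assumptions, not the method proposed in the dissertation.

```python
# Illustrative only: aggregate per-learner visit sequences into a transition-count
# matrix, then follow the most frequent next step from a starting unit.
from collections import defaultdict

sessions = [                      # hypothetical sequences of content units per learner
    ["intro", "html", "css", "quiz1"],
    ["intro", "html", "quiz1"],
    ["intro", "css", "html", "quiz1"],
]

transitions = defaultdict(lambda: defaultdict(int))
for seq in sessions:
    for a, b in zip(seq, seq[1:]):
        transitions[a][b] += 1    # count how often unit b follows unit a

def frequent_path(start, max_len=10):
    """Greedily follow the most common outgoing transition, avoiding cycles."""
    path, seen = [start], {start}
    while len(path) < max_len and transitions[path[-1]]:
        nxt = max(transitions[path[-1]].items(), key=lambda kv: kv[1])[0]
        if nxt in seen:
            break
        path.append(nxt)
        seen.add(nxt)
    return path

print(frequent_path("intro"))     # -> ['intro', 'html', 'quiz1'] for the data above
```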
As masses of learners concurrently enter the learning system, the system is often unable to serve such a massive workload, particularly during peak periods of learning activity. We use the server-cluster architecture as a way to create scalable and highly available solutions. However, hosting a variety of learning content from different owners on such a distributed server system raises new design and management problems and requires new solutions. This dissertation describes the research work we pursued to construct a system that addresses the challenges of hosting learning content in a server farm environment.
|
5 |
Smart distributed processing technologies for hedge fund management. Thayalakumar, Sinnathurai, January 2017 (has links)
Distributed processing cluster design using commodity hardware and software has proven to be a technological breakthrough in the field of parallel and distributed computing. The research presented herein is an original investigation of distributed processing using hybrid processing clusters to improve the calculation efficiency of compute-intensive applications. This has opened a new frontier in affordable supercomputing that can be utilised by businesses and industries at various levels. Distributed processing using commodity computer clusters has become extremely popular over recent years, particularly among university research groups and research organisations. The research work discussed herein addresses the bespoke design and implementation of highly specific and different types of distributed processing clusters, with applied load balancing techniques that are well suited to particular business requirements.

The research was performed in four cohesively interconnected phases to find a suitable solution using new distributed processing approaches. The first phase is the implementation of a bespoke distributed processing cluster that uses an existing network of workstations as a calculation cluster, based on a loosely coupled distributed processing system design, and it improved the calculation efficiency of certain legacy applications. This approach demonstrates an innovative, cost-effective, and efficient way to utilise a workstation cluster for distributed processing. The second phase improves the calculation efficiency of the distributed processing system: a new type of load balancing system is designed to incorporate multiple processing devices, using hardware, software, and application-related parameters to assign calculation tasks to each processing device accordingly. Three types of load balancing method are tested, static, dynamic, and hybrid; each has its own advantages, and all three further improved the calculation efficiency of the distributed processing system. The third phase helps the company improve batch processing calculation times: two separate dedicated calculation clusters are built using small form factor (SFF) computers and PCs as separate peer-to-peer (P2P) network-based calculation clusters. Multiple batch processing applications were tested on these clusters, and the results show consistent calculation time improvements across all the applications tested. In addition, the dedicated clusters built from SFF computers offer reduced power consumption, small cluster size, and comparatively low cost to suit particular business needs. The fourth phase incorporates all the processing devices available in the company into a hybrid calculation cluster, utilising various types of servers, workstations, and SFF computers to form a high-throughput distributed processing system that consolidates multiple calculation clusters. These clusters can be used as multiple mutually exclusive clusters or combined into a single cluster, depending on the applications used. The test results show considerable calculation time improvements when the consolidated calculation cluster is used in conjunction with rule-based load balancing techniques.
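As a minimal sketch of the hybrid idea of combining a static device capability score with dynamic load when assigning calculation tasks (the thesis's actual parameters and rules are not given in the abstract), the following example scores heterogeneous devices and assigns batch tasks accordingly; the device names, scores, and weighting are assumptions.

```python
# Hypothetical rule-based task assignment across heterogeneous processing devices:
# each device is scored by static capability discounted by its current backlog.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    cpu_score: float      # static hardware capability (e.g. a benchmark result)
    queued: int = 0       # tasks currently assigned (dynamic state)

def assign(devices, tasks, alpha=0.7):
    """Hybrid policy: weight static capability against dynamic queue length."""
    plan = {d.name: [] for d in devices}
    for task in tasks:
        best = max(devices, key=lambda d: alpha * d.cpu_score - (1 - alpha) * d.queued)
        plan[best.name].append(task)
        best.queued += 1
    return plan

devices = [Device("server", 10.0), Device("workstation", 6.0), Device("sff-pc", 3.0)]
print(assign(devices, [f"batch-{i}" for i in range(12)]))
```

Stronger devices receive proportionally more tasks, while the backlog term prevents any single device from being saturated; a purely static policy would drop the second term, and a purely dynamic one the first.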
The main design concept of the system is based on an original design that uses first-principles methods and utilises existing LAN and separate P2P network infrastructures, hardware, and software. The tests and investigations conducted show promising results: the company's legacy applications can be modified and run on different types of distributed processing clusters to achieve calculation and processing efficiency for various applications within the company. The test results confirm the expected calculation time improvements in controlled environments and show that it is feasible to design and develop a bespoke, dedicated distributed processing cluster using existing hardware, software, and low-cost SFF computers. Furthermore, the combination of a bespoke distributed processing system with appropriate load balancing algorithms shows considerable calculation time improvements for various legacy and bespoke applications. Hence, the bespoke design is well suited to providing calculation time improvements for the critical problems currently faced by the sponsoring company.
|
6 |
Návrh a implementace SAP CAR s automatickým zotavením z havárie / Design and Implementation of SAP CAR with Automatic Disaster Recovery Function. Svitálek, Petr, January 2017 (has links)
The main purpose of this master's thesis is to propose and implement a SAP CAR (Customer Activity Repository) solution as an application within the current enterprise information system. Following an analysis of the existing system, a solution is designed that fulfils the customer's demands and, at the same time, is feasible under the given conditions and in the current environment. The project is closely tied to addressing disaster recovery in order to make the system highly available. The final design is implemented, tested, and then handed over to the customer.
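As a generic illustration of the automatic disaster recovery idea (not SAP-specific tooling, which the abstract does not detail), the sketch below shows a watchdog that probes the primary system and triggers a switch-over to a standby after repeated probe failures; the probe, the switch-over action, and the thresholds are placeholders.

```python
# Generic failover watchdog sketch: repeated failed health probes of the primary
# site trigger a single switch-over to the standby system.
import time

def probe_primary() -> bool:
    """Placeholder health probe; in practice this would check the application endpoint."""
    return True

def switch_to_standby():
    """Placeholder recovery action (e.g. promoting the standby system)."""
    print("switching service to standby site")

def watchdog(max_failures=3, interval_s=30, probe=probe_primary, failover=switch_to_standby):
    failures = 0
    while True:
        if probe():
            failures = 0                 # any successful probe resets the counter
        else:
            failures += 1
            if failures >= max_failures: # sustained outage -> fail over once and stop
                failover()
                return
        time.sleep(interval_s)

# watchdog()  # would run indefinitely; shown here only as a sketch
```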
|