1 |
Performance scalability of n-tier application in virtualized cloud environments: Two case studies in vertical and horizontal scaling. Park, Junhee. 27 May 2016.
The prevalence of multi-core processors, together with recent advances in virtualization technologies, has enabled horizontal and vertical scaling within a physical node, achieving economical sharing of computing infrastructures as computing clouds. Through hardware virtualization, consolidated servers, each allotted a specific number of cores, run on the same physical node in dedicated Virtual Machines (VMs) to increase overall node utilization, which increases profit by reducing operational costs. Unfortunately, despite the conceptual simplicity of vertical and horizontal scaling in virtualized cloud environments, leveraging the full potential of this technology has presented significant scalability challenges in practice. One of the fundamental problems is performance unpredictability in virtualized cloud environments (ranked fifth among the top 10 obstacles to the growth of cloud computing). In this dissertation, we present two case studies, in vertical and horizontal scaling, that address this challenging problem. In the first case study, we describe concrete experimental evidence showing an important source of performance variation: the mapping of virtual CPUs to physical cores. We then conduct an experimental comparative study of three major hypervisors (VMware, KVM, and Xen) with regard to their support of n-tier applications running on multi-core processors. In the second case study, we present an empirical study showing that memory thrashing caused by interference among consolidated VMs is a significant source of performance interference that hampers the horizontal scalability of n-tier application performance. We then execute transient event analyses of fine-grained experimental data that link very short bottlenecks caused by memory thrashing to very long response time (VLRT) requests. Furthermore, we provide three practical techniques (VM migration, memory reallocation, and soft resource allocation) and show that they can mitigate the effects of performance interference among consolidated VMs.
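As a hedged illustration only (the dissertation does not publish code here), the following Python sketch shows one way to fix the vCPU-to-physical-core mapping discussed above on a KVM host, using the standard `virsh vcpupin` command; the domain name `app-tier-vm` and the core layout are hypothetical.

```python
import subprocess

def pin_vcpus(domain: str, vcpu_to_core: dict[int, int]) -> None:
    """Pin each virtual CPU of a libvirt domain to one physical core.

    A fixed mapping removes the scheduler's freedom to migrate vCPUs
    across cores, one source of the performance variation described above.
    """
    for vcpu, core in vcpu_to_core.items():
        # virsh vcpupin <domain> <vcpu> <physical-cpu-list>
        subprocess.run(
            ["virsh", "vcpupin", domain, str(vcpu), str(core)],
            check=True,
        )

if __name__ == "__main__":
    # Hypothetical 4-vCPU web-tier VM pinned to cores 0-3 of the host.
    pin_vcpus("app-tier-vm", {0: 0, 1: 1, 2: 2, 3: 3})
```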
|
2 |
An empirical approach to automated performance management for elastic n-tier applications in computing clouds. Malkowski, Simon J. 03 April 2012.
Achieving a high degree of efficiency is non-trivial when managing the performance of large web-facing applications such as e-commerce websites and social networks. While computing clouds have been touted as a good solution for elastic applications, many significant technological challenges still have to be addressed in order to leverage the full potential of this new computing paradigm. In this dissertation I argue that the automation of elastic n-tier application performance management in computing clouds presents novel challenges to classical system performance management methodology that can be successfully addressed through a systematic empirical approach. I present strong evidence in support of my thesis in a framework of three incremental building blocks: Experimental Analysis of Elastic System Scalability and Consolidation, Modeling and Detection of Non-trivial Performance Phenomena in Elastic Systems, and Automated Control and Configuration Planning of Elastic Systems. More concretely, I first provide a proof of concept for the feasibility of large-scale experimental database system performance analyses, and illustrate several complex performance phenomena based on the gathered scalability and consolidation data. Second, I extend these initial results to a proof of concept for automating bottleneck detection based on statistical analysis and an abstract definition of multi-bottlenecks. Third, I build a performance control system that manages elastic n-tier applications efficiently with respect to complex performance phenomena such as multi-bottlenecks. This control system provides a proof of concept for automated online performance management based on empirical data.
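The abstract summarizes rather than specifies the statistical detection method; purely as an illustrative sketch (not the author's algorithm), the following Python snippet flags saturated resources once throughput stops growing with offered load, a common empirical signature of bottlenecks in n-tier experiments. All data and thresholds are hypothetical.

```python
def bottleneck_candidates(samples, util_threshold=0.9):
    """Return resources whose utilization saturates while throughput
    stays flat as offered load increases.

    `samples` maps a workload level to per-resource utilizations (0..1)
    and the measured throughput for that level.
    """
    loads = sorted(samples)
    tputs = [samples[l]["tput"] for l in loads]
    # Flat throughput: the last load step adds less than 5% throughput.
    throughput_flat = len(tputs) >= 2 and tputs[-1] < 1.05 * tputs[-2]
    if not throughput_flat:
        return []
    # In the saturated region (highest load), report saturated resources.
    top = samples[loads[-1]]["util"]
    return [r for r, u in top.items() if u >= util_threshold]

# Hypothetical measurements from an n-tier scalability experiment.
data = {
    1000: {"util": {"web_cpu": 0.45, "db_cpu": 0.70}, "tput": 820},
    2000: {"util": {"web_cpu": 0.60, "db_cpu": 0.93}, "tput": 1480},
    3000: {"util": {"web_cpu": 0.62, "db_cpu": 0.97}, "tput": 1510},
}
print(bottleneck_candidates(data))  # ['db_cpu']
```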
|
3 |
An automated approach to create, manage and analyze large-scale experiments for elastic n-tier application in clouds. Jayasinghe, Indika D. 20 September 2013.
Cloud computing has revolutionized the computing landscape by providing on-demand, pay-as-you-go access to elastically scalable resources. Many applications are now being migrated from on-premises data centers to public clouds; yet, the transition to the cloud is not always straightforward and smooth. An application that performed well in an on-premises data center may not perform identically in public computing clouds, because many variables, such as virtualization, can impact the application's performance. By collecting significant performance data through experimental study, the cloud's complexity, particularly as it relates to performance, can be revealed. However, conducting large-scale system experiments is particularly challenging because of the practical difficulties that arise during experimental deployment, configuration, execution, and data processing. In spite of these complexities, we argue that a promising approach to addressing these challenges is to leverage automation to facilitate the exhaustive measurement of large-scale experiments.
Automation provides numerous benefits: it removes the error-prone and cumbersome involvement of human testers, reduces the burden of configuring and running large-scale experiments for distributed applications, and accelerates the process of reliable application testing. In our approach, we have automated the three key activities of the experiment measurement process: create, manage, and analyze. In create, we prepare the platform and deploy and configure applications. In manage, we initialize the application components (in a reproducible and verifiable order), execute workloads, collect resource monitoring and other performance data, and parse and upload the results to the data warehouse. In analyze, we process the collected data using various statistical and visualization techniques to understand and explain performance phenomena. In our approach, a user provides the experiment configuration file, so in the end the user merely receives the results while the framework does everything else. We enable the automation through code generation. From an architectural viewpoint, our code generator adopts the compiler approach of multiple, serial transformative stages; the hallmarks of this approach are that stages typically operate on an XML document that serves as the intermediate representation, and XSLT performs the code generation. Our automated approach to large-scale experiments has enabled cloud experiments to scale well beyond the limits of manual experimentation, and it has enabled us to identify non-trivial performance phenomena that would not have been observable otherwise.
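The framework's actual XML schema and stylesheets are not reproduced in the abstract; the following is a minimal Python sketch, assuming `lxml` is available, of one XSLT transformative stage that turns a hypothetical experiment configuration into deployment commands. Element names and the output format are illustrative only.

```python
from lxml import etree

# Hypothetical experiment configuration in the spirit of the approach above.
config_xml = """
<experiment name="rubbos-scaleout">
  <tier role="web" nodes="2" image="apache-php"/>
  <tier role="app" nodes="4" image="tomcat"/>
  <tier role="db"  nodes="1" image="mysql"/>
  <workload users="3000" ramp="300"/>
</experiment>
"""

# One transformative stage: XSLT turns the XML intermediate representation
# into deployment commands, mirroring the compiler-style pipeline described.
stage_xslt = """
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/experiment">
    <xsl:for-each select="tier">
      <xsl:text>deploy --role </xsl:text><xsl:value-of select="@role"/>
      <xsl:text> --nodes </xsl:text><xsl:value-of select="@nodes"/>
      <xsl:text> --image </xsl:text><xsl:value-of select="@image"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
"""

transform = etree.XSLT(etree.XML(stage_xslt))
print(str(transform(etree.XML(config_xml))))
```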
|
4 |
Network topology for N-tier applications: For increased security, flexibility, scalability and simplicity / Nätverkstopologi till N-Tier applikationer: För ökad säkerhet, flexibilitet, skalbarhet och enkelhet. Ghrissi, Hatem; Mando, Ibrahim. January 2022.
Multi-tier architectures, or N-tier architectures, where "N" refers to the number of tiers starting at 1, are a way to structure networks and applications, whether in on-premises infrastructures, on cloud platforms, or as Infrastructure as Code (IaC). These architectures divide the network or application into separate layers and tiers. Each layer has a specific area of responsibility, and higher layers can use services in the lower layers, but not vice versa. Tiers refer to a physical separation of the network and applications, where the different tiers communicate with each other via network communication points to which specific rules apply. A tier can contain one or more layers. This architecture can provide high security, flexibility, scalability, simplicity, and stability at a reasonable cost. This thesis has three main parts. The first part analyzes multi-tier architectures, their structure, and the advantages and disadvantages they offer; in addition, we compare and discuss the differences between the N-tier architecture and other architectures. In the second part, we examine the environments, security aspects, costs, tools, and platforms in which this architecture can be implemented. The third and final part is practical work, in which we plan, design, implement, and automate a network infrastructure for a newly established company based on an N-tier architecture in a test environment created for this purpose. The thesis concludes with a presentation of the practical results, a discussion of the methods and societal aspects, and recommendations and suggestions for future work. The thesis has two main outcomes: a detailed theoretical explanation and comparison of different network architectures and their areas of application, and a template for carrying out similar projects from the idea stage to an automated infrastructure that can be integrated on any platform.
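The thesis's practical work is infrastructure-oriented; purely as an illustrative sketch of the layering rule described above (higher layers use services of lower layers, never the reverse), the following minimal Python example wires three layers together. All class and method names are hypothetical.

```python
# Minimal sketch of the layering rule: each layer depends only on the
# layer directly below it. Names are illustrative only.

class DataLayer:
    """Lowest layer: owns storage and knows nothing about layers above."""
    def __init__(self):
        self._rows = {1: "alice", 2: "bob"}

    def fetch_user(self, user_id: int) -> str:
        return self._rows[user_id]

class BusinessLayer:
    """Middle layer: applies rules, uses only the data layer's services."""
    def __init__(self, data: DataLayer):
        self._data = data

    def display_name(self, user_id: int) -> str:
        return self._data.fetch_user(user_id).title()

class PresentationLayer:
    """Top layer: formats output, uses only the business layer's services."""
    def __init__(self, logic: BusinessLayer):
        self._logic = logic

    def render(self, user_id: int) -> str:
        return f"<h1>Welcome, {self._logic.display_name(user_id)}</h1>"

# Wiring from the bottom up; calls flow strictly downward at runtime.
ui = PresentationLayer(BusinessLayer(DataLayer()))
print(ui.render(1))  # <h1>Welcome, Alice</h1>
```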
|
5 |
Implementation and evaluation of data persistence tools for temporal versioned data models / Implementation och utvärdering av persistensverktyg för temporala versionshanterade datamodeller. Knutsson, Tor. January 2009.
The purpose of this thesis was to investigate different concepts and tools that could support the development of a middleware which persists a temporal and versioned relational data model in an enterprise environment. A further requirement for the target application was that changes to the data model had to be facilitated, so that a small change to the model would not result in changes in several files and application layers. Other requirements included permissioning and audit tracing. In the thesis, the reader is presented with a comparison of a set of tools for enterprise development and object/relational mapping. One of the tools, a code generator, is chosen as a good candidate to match the requirements of the project. An implementation using the chosen tool is presented, along with an XML-based language used to define a data model and to provide input data for the tool. Other concepts concerning the implementation are then described in detail. Finally, the author discusses alternative solutions and future improvements.
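The thesis's XML-based model language is not reproduced in the abstract; as a hedged sketch under that assumption, the following Python snippet reads a hypothetical XML entity definition and generates DDL for a temporal, versioned table. The element names and generated columns are illustrative, not the thesis's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical model definition in the spirit of the XML-based language
# described above.
model_xml = """
<entity name="Contract">
  <field name="id"     type="INTEGER"/>
  <field name="amount" type="DECIMAL"/>
</entity>
"""

def generate_temporal_table(xml_text: str) -> str:
    """Emit DDL for a temporal, versioned table from an XML entity model.

    Every entity gets version and validity columns so past states can be
    reconstructed, the core of a temporal versioned data model.
    """
    entity = ET.fromstring(xml_text)
    cols = [f'    {fld.get("name")} {fld.get("type")}'
            for fld in entity.findall("field")]
    cols += [
        "    version INTEGER NOT NULL",
        "    valid_from TIMESTAMP NOT NULL",
        "    valid_to TIMESTAMP",  # NULL marks the current version
    ]
    return f'CREATE TABLE {entity.get("name")} (\n' + ",\n".join(cols) + "\n);"

print(generate_temporal_table(model_xml))
```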
|
6 |
Web application development with .NET: 3-tier architecture. Dhali, Salle. January 2012.
The purpose of this project is to develop a web application for the Student Union of Mid Sweden University using the modern and comprehensive Microsoft .NET framework platform architecture. At present, the existing web application is divided into several modules built with a server-side scripting language and an open-source database. The customer would like to develop the entire web application using Microsoft development tools and technologies in order to determine the possible benefits in terms of cost, maintenance, flexibility, and security, as well as user-friendly interaction options for all involved parties. The primary aim of the project is to build a bookstore module for the Student Union, which is responsible for selling literature to students at the university. The module will be integrated with a database system into which an administrator, a member of staff working in the Student Union, can add a new book when it arrives and update or delete it later if necessary. In addition, the module makes the details of all books, organized by category, viewable to the students. The other part of this project aims to find a pattern similar to the bookstore module in which ordinary users can authenticate themselves against a database, add their curriculum vitae entries, and update them at a later stage as required.
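The thesis implements the module with .NET; as a language-agnostic illustration only, the following Python/sqlite3 sketch shows the data-tier operations the abstract describes (adding a book and listing books by category). Table and function names are hypothetical.

```python
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    # Minimal bookstore table for the data tier of a 3-tier module.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS books (
               id INTEGER PRIMARY KEY,
               title TEXT NOT NULL,
               category TEXT NOT NULL,
               price REAL NOT NULL)"""
    )

def add_book(conn, title: str, category: str, price: float) -> int:
    cur = conn.execute(
        "INSERT INTO books (title, category, price) VALUES (?, ?, ?)",
        (title, category, price),
    )
    return cur.lastrowid

def books_in_category(conn, category: str) -> list[tuple]:
    return conn.execute(
        "SELECT id, title, price FROM books WHERE category = ?", (category,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
init_db(conn)
add_book(conn, "Operating System Concepts", "Computer Science", 450.0)
print(books_in_category(conn, "Computer Science"))
```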
|