1 |
On Improving the Security of Virtualized Systems through Unikernelized Driver Domain and Virtual Machine Monitor Compartmentalization and Specialization
Mehrab, A. K. M. Fazla, 31 March 2023
Virtualization is the backbone of cloud infrastructures. Its core subsystems include hypervisors and virtual machine monitors (VMMs). They ensure the isolation and security of co-existent virtual machines (VMs) running on the same physical machine. Traditionally, driver domains -- isolated VMs in a hypervisor such as Xen that run device drivers -- use general-purpose, full-featured OSs (e.g., Linux), which have a large attack surface, as evidenced by their growing number of common vulnerabilities and exposures (CVEs). We argue for using the unikernel operating system (OS) model for driver domains. In this model, a single application is statically compiled together with the minimum necessary kernel code and libraries to produce a single address-space image, reducing code size by as much as an order of magnitude, which yields security benefits.
We develop a driver domain OS, called Kite, using NetBSD OS's rumprun unikernel. Since rumprun is directly based on NetBSD's code, it allows us to leverage NetBSD's large collection of device drivers, including highly specialized ones such as Amazon ENA. Kite's design overcomes several significant challenges, including rumprun's limited Xen para-virtualization (PV) I/O support, the lack of Xen backend drivers, which prevents rumprun from being used as a driver domain OS, and NetBSD's lack of support for running driver domains in Xen. We instantiate Kite for the two most widely used I/O devices, storage and network, by designing and implementing the storage backend and network backend drivers. Our evaluations reveal that Kite achieves performance competitive with a Linux-based driver domain while using 10x fewer system calls, mitigates a set of CVEs, and retains all the benefits of unikernels, including a reduced number of return-oriented programming (ROP) gadgets and improved gadget-related metrics.
General-purpose VMMs include a large number of components that may not be used in many VM configurations, resulting in a large attack surface. In addition, they lack intra-VMM isolation, which degrades security: vulnerabilities in one VMM component can be exploited to compromise other components, the host OS, or other VMs (through privilege escalation). To mitigate these security challenges, we develop principles for VMM compartmentalization and specialization. We construct a prototype, called Redwood, embodying those principles. Redwood is built by extending Cloud Hypervisor and compartmentalizes thirteen critical components (i.e., virtual I/O devices) using Intel Memory Protection Keys (MPK), a hardware primitive available in Intel CPUs. Redwood has fifteen fine-grained modules, each representing a single feature, which increases its configurability and flexibility. Our evaluations reveal that Redwood is as performant as the baseline Cloud Hypervisor, has a 50% smaller VMM image size and 50% fewer ROP gadgets, and is resilient to an array of CVEs.
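A rough illustration of the hardware primitive Redwood builds on is the Linux userspace protection-key interface shown below. This is a minimal sketch of MPK-based compartmentalization in general, assuming a userspace VMM process and an invented compartment; it is not Redwood's code, and the region contents and access pattern are illustrative assumptions.

```c
/* Illustrative sketch of MPK-based compartmentalization (not Redwood's code).
 * Build: gcc -D_GNU_SOURCE mpk_sketch.c -o mpk_sketch
 * Needs a CPU and kernel with protection-key (pkey) support. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t len = 4096;
    /* Memory standing in for one compartment's private state,
     * e.g. a virtual device backend's data structures. */
    void *compartment = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (compartment == MAP_FAILED) { perror("mmap"); return 1; }

    int pkey = pkey_alloc(0, 0);              /* allocate a protection key */
    if (pkey < 0) { perror("pkey_alloc"); return 1; }
    if (pkey_mprotect(compartment, len, PROT_READ | PROT_WRITE, pkey) < 0) {
        perror("pkey_mprotect"); return 1;
    }

    /* Default policy: the rest of the process cannot touch this compartment. */
    pkey_set(pkey, PKEY_DISABLE_ACCESS);

    /* Entering the compartment: flip the per-thread access rights (cheap,
     * no syscall), do the work, then revoke access again on exit. */
    pkey_set(pkey, 0);
    strcpy((char *)compartment, "example virtual-device request state");
    pkey_set(pkey, PKEY_DISABLE_ACCESS);

    /* Any access from here on, e.g. by a compromised component that did not
     * pass through the compartment entry gate, would fault with SIGSEGV. */

    munmap(compartment, len);
    pkey_free(pkey);
    return 0;
}
```

The point of the pattern is that toggling the per-thread protection-key rights is a cheap, unprivileged operation, so a VMM can keep each compartment's state inaccessible by default and open it only while that compartment's code runs.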
I/O acceleration architectures, such as the Data Plane Development Kit (DPDK), enhance VM performance by moving the data plane from the VMM to a separate userspace application. Because the VMM must share its VMs' sensitive information with accelerated applications, this can potentially degrade security. The dissertation's final contribution is the compartmentalization of a VM's sensitive data within an accelerated application using the Intel MPK hardware primitive. Our evaluations reveal that the technique does not cause any degradation in I/O performance and mitigates potential attacks and a class of CVEs. / Doctor of Philosophy / Instead of using software on a local device like a laptop or a mobile phone, consumers can access the same services from a remote high-end computer through high-speed Internet. This paradigm shift in computing is enabled by a remote computing infrastructure known as the "cloud," wherein networked server computers are deployed to execute third-party applications, often untrusted. Multiple applications are consolidated on the same server to save computing resources, but this can compromise security: a malicious application can steal co-existent applications' sensitive data. To enable resource consolidation and mitigate security attacks, applications are executed using a virtual machine (VM) -- an abstract machine that runs its own operating system (OS). Multiple VMs run on a single physical machine using two software systems: the hypervisor and the virtual machine monitor (VMM). They ensure that VMs are spatially isolated from each other, localizing security attacks. This dissertation focuses on enhancing the security of hypervisors and VMMs.
The hypervisor and VMM have multiple responsibilities in supporting the OS running on the physical computer and the VMs. The OS runs software called device drivers, which communicate with input-output (I/O) hardware such as network and storage devices. Device drivers, usually written by third parties and I/O device manufacturers, are highly vulnerable to security attacks. To mitigate such attacks, device drivers are often run inside special VMs, called driver domains. State-of-the-art driver domains use a general-purpose, full-featured OS such as Linux, which has a large code base (in the tens of millions of lines of code) and thus a large attack surface. To address this security challenge, the dissertation proposes using lightweight, single-purpose VMs, called unikernels, as driver domain OSs. Their code size is smaller than that of full-featured OSs by as much as an order of magnitude, which yields security benefits.
We design and develop a unikernel-based driver domain, called Kite, for network and storage I/O devices. Kite uses NetBSD OS's rumprun unikernel to create a driver domain OS. Using the rumprun unikernel as a driver domain OS requires overcoming many technical challenges, including the lack of support in a popular hypervisor such as Xen for performing I/O operations and communicating with rumprun. Kite's design overcomes these challenges. Our empirical studies reveal that Kite is ten times less likely to be affected by future attacks and ten times faster to start than existing solutions for driver domains. At the same time, Kite domains match the performance of state-of-the-art driver domain OSs such as Linux.
The hypervisor and VMM are responsible for creating VMs and providing resources such as memory, processing power, and hardware device access. Existing VMMs are designed to be versatile. Thus, they include a large number of components that may not be used in many VM configurations, resulting in a large attack surface. In addition, VMM components are not well spatially separated from each other. Thus, vulnerabilities in one component can be exploited to compromise other components. To address these security challenges, the dissertation proposes a set of principles for i) customizing a VMM for each VM's needs, instead of using one VMM for all VMs, and ii) strongly isolating VMM components from each other. We realize these principles in a prototype implementation called Redwood. Redwood is highly configurable and separates critical I/O components from each other using a hardware primitive. Our evaluations reveal that Redwood significantly reduces the VMM's size and VMM's vulnerabilities while maintaining performance.
To enhance VM performance, I/O acceleration software is often used to eliminate communication overheads in the VMM. To do so, the VMM must share VMs' sensitive information with the accelerated applications, which can potentially degrade security. The dissertation's final contribution is a technique that strongly isolates and limits access to sensitive information in the application using a hardware primitive. Our evaluations reveal that the technique improves security by localizing attacks without sacrificing performance.
|
2 |
Containers & Virtual machines: A performance, resource & power consumption comparison
Lindström, Martin, January 2022
Due to the growth of cloud computing in recent years, the use of virtualization has exploded. Virtual machines (VMs) and containers are both virtualization technologies used to create isolated computing environments. While VMs are created and managed by hypervisors and need their own full guest operating system, containers share the kernel of the host computer and do not need a full guest operating system. Because of this, containers are commonly claimed to have less overhead, yielding higher performance and lower resource usage than VMs. In this paper we perform a literature study along with an empirical study to examine the differences between containers and virtual machines in terms of CPU, memory, and disk performance; CPU and memory resource utilization; and power consumption. To answer the performance question, a series of benchmarks were run inside both a container and a VM. During these benchmarks the resource utilization of the host machine was also measured to answer the second question, and to answer the third and final question the power draw was measured while some of the benchmarks were running. The results showed that CPU performance was nearly identical between the two. Memory performance was similar for the most part, although some benchmarks showed fairly large differences in favor of one or the other. For disk performance, the container was between 15% and 50% faster depending on the benchmark. As for resource usage, CPU usage was the same for both technologies, but memory usage differed greatly in favor of the container: the VM used between 3 and 4 GiB while the container used between 70 MiB and 2.5 GiB, depending on the benchmark. Power draw was the same for both technologies under CPU and memory load, but when idle the VM drew around 40% more power.
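The benchmark methodology described above can be approximated with a small self-contained program run unchanged in both environments. The sketch below is a hypothetical memory-copy microbenchmark, not one of the tools used in the study; the buffer size and iteration count are arbitrary assumptions, and the study's actual numbers come from established benchmark suites.

```c
/* Minimal memory-bandwidth microbenchmark (illustrative only): run the same
 * binary inside a container and a VM and compare the reported throughput.
 * Build: gcc -O2 membench.c -o membench */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    const size_t size = 256UL * 1024 * 1024;   /* 256 MiB working set */
    const int iters = 20;
    char *src = malloc(size), *dst = malloc(size);
    if (!src || !dst) { perror("malloc"); return 1; }
    memset(src, 0xa5, size);                   /* touch all pages up front */
    memset(dst, 0, size);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
        memcpy(dst, src, size);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gib = (double)size * iters / (1024.0 * 1024.0 * 1024.0);
    printf("copied %.1f GiB in %.3f s (%.2f GiB/s)\n", gib, secs, gib / secs);
    free(src);
    free(dst);
    return 0;
}
```

Compiling the same binary and running it inside the container and inside the VM gives directly comparable throughput figures, which is the essence of the comparison performed in the paper.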
|
3 |
New Architectures and Mechanisms for the Network Subsystem in Virtualized Servers
Ram, Kaushik Kumar, 24 July 2013
Machine virtualization has become a cornerstone of modern datacenters. It enables server consolidation as a means to reduce costs and increase efficiencies. The communication endpoints within the datacenter are now virtual machines (VMs), not physical servers. Consequently, the datacenter network now extends into the server and last hop switching occurs inside the server. Today, thanks to increasing core counts on processors, server VM densities are on the rise. This trend is placing enormous pressure on the network I/O subsystem and the last hop virtual switch to support efficient communication, both internal and external to the server. But the current state-of-the-art solutions fall short of these requirements. This thesis presents new architectures and mechanisms for the network subsystem in virtualized servers to build efficient virtualization platforms.
Specifically, there are three primary contributions in this thesis. First, it presents a new mechanism to reduce memory sharing overheads in driver domain-based I/O architectures. The key idea is to enable a guest operating system to reuse its I/O buffers that are shared with a driver domain. Second, it describes Hyper-Switch, a highly streamlined, efficient, and scalable software-based virtual switching architecture, specifically for hypervisors that support driver domains. The Hyper-Switch combines the best of the existing architectures by hosting the device drivers in a driver domain to isolate any faults and placing the virtual switch in the hypervisor to perform efficient packet switching. Further, the Hyper-Switch implements several optimizations, such as virtual machine state-aware batching, preemptive copying, and dynamic offloading of packet processing to idle CPU cores, to enable efficient packet processing, better utilization of the available CPU resources, and higher concurrency. This architecture eliminates the memory sharing overheads associated with driver domains. Third, this thesis proposes an alternate virtual switching architecture, called sNICh, which explores the idea of server/switch integration. The sNICh is a combined network interface card (NIC) and datacenter switching accelerator. This takes the Hyper-Switch architecture one step further. It offloads the data plane of the switch to the network device, eliminating driver domains entirely.
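The buffer-reuse mechanism from the first contribution can be pictured as a pool of I/O buffers that a guest shares with the driver domain once and then recycles across requests. The sketch below is a simplified, self-contained illustration of that pooling pattern; share_with_driver_domain() is a placeholder standing in for the hypervisor-specific grant operation (for example, a Xen grant-table mapping) and is not a real API.

```c
/* Simplified sketch of reusing I/O buffers shared with a driver domain.
 * share_with_driver_domain() is a placeholder for the hypervisor-specific
 * grant/map operation (e.g., Xen grant tables); it is NOT a real API. */
#include <stdio.h>

#define POOL_SIZE 16
#define BUF_BYTES 4096

struct io_buf {
    char data[BUF_BYTES];
    int  in_use;
    int  shared;          /* already granted to the driver domain? */
};

static struct io_buf pool[POOL_SIZE];

/* Placeholder for the expensive sharing step that reuse avoids repeating:
 * in Xen terms, issuing a grant and letting the driver domain map the page. */
static void share_with_driver_domain(struct io_buf *b) {
    b->shared = 1;
}

static struct io_buf *get_buf(void) {
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            if (!pool[i].shared)              /* share once, reuse afterwards */
                share_with_driver_domain(&pool[i]);
            return &pool[i];
        }
    }
    return NULL;                              /* pool exhausted */
}

static void put_buf(struct io_buf *b) {
    /* Do not revoke the grant; keep the buffer shared for the next request. */
    b->in_use = 0;
}

int main(void) {
    for (int req = 0; req < 64; req++) {      /* many requests, few shares */
        struct io_buf *b = get_buf();
        if (!b) { fprintf(stderr, "no free buffer\n"); return 1; }
        snprintf(b->data, BUF_BYTES, "request %d payload", req);
        put_buf(b);
    }
    puts("64 requests served with at most 16 sharing operations");
    return 0;
}
```

Because the grant is issued only the first time a buffer is handed out, steady-state requests avoid the repeated grant and map/unmap work that the thesis identifies as a memory-sharing overhead of driver domain I/O.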
|
4 |
Virtual platforms: achieving performance and isolation properties on shared multicore servers
Tembey, Priyanka, 13 January 2014
Multicore servers in datacenter systems are routinely used to run multiple disparate application workload mixes. Analyses performed in Google's datacenters show, for instance, that components (i.e., processes) of up to 19 distinct applications may be co-deployed on a single multicore node.
Virtualization technology further encourages this trend, increasing platform utilization via higher levels of workload consolidation. Systems software on these shared server nodes must meet challenges that include (a) providing end-to-end performance guarantees for possibly multiple applications while delivering global platform-level properties such as power or utilization caps; (b) mediating use of shared resources efficiently while offering isolation guarantees so that multiple applications running on consolidated platforms maintain their performance properties predictably; and (c) meeting multiple, dynamic, competing application performance levels and platform-level properties efficiently, especially in oversubscribed systems.
This thesis addresses (a)-(c) as follows: (1) by developing system-level mechanisms for these challenges, (2) by demonstrating their ability to deliver improved application performance with less variability and improved platform efficiency, and (3) by creating principles and representative methods for realizing the isolation properties sought by applications and the efficiency sought for platforms. The concrete realization of these goals is a Virtual Platforms (VP) enabled hypervisor, in which per-application or platform-level policy objectives are expressed at the system level via elastic resource abstractions, which may also change dynamically at runtime. For multiple consolidated applications (and their virtual platforms), methods monitor and mediate their use of shared platform resources to deliver improved isolation for predictable performance, while Merlin, a resource allocator for shared multicore servers, makes it easier to implement higher-level arbitration policies while meeting multiple performance and platform properties.
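As a rough picture of the kind of arbitration policy a Merlin-like allocator mediates, the sketch below divides a platform-level CPU budget among virtual platforms in proportion to per-platform weights. The platform names, weights, and the 90% utilization cap are invented for the example; this is not Merlin's actual algorithm.

```c
/* Illustrative weighted proportional-share division of a CPU budget among
 * virtual platforms; not Merlin's actual algorithm. */
#include <stdio.h>

struct vplatform {
    const char *name;
    double weight;        /* relative share requested by policy */
    double cpu_pct;       /* computed allocation, percent of the machine */
};

int main(void) {
    struct vplatform vps[] = {
        { "web-tier",   3.0, 0.0 },    /* example weights (assumptions) */
        { "analytics",  2.0, 0.0 },
        { "background", 1.0, 0.0 },
    };
    const int n = sizeof vps / sizeof vps[0];
    const double platform_cap = 90.0;  /* platform-level utilization cap (%) */

    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += vps[i].weight;

    for (int i = 0; i < n; i++) {
        vps[i].cpu_pct = platform_cap * vps[i].weight / total;
        printf("%-10s weight %.1f -> %.1f%% of CPU\n",
               vps[i].name, vps[i].weight, vps[i].cpu_pct);
    }
    return 0;
}
```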
As single-node multicore platforms evolve further from small numbers of homogeneous cores toward multiple sets or 'islands' of potentially heterogeneous cores residing on a single chip, such platforms will have multiple resource managers, each managing its respective island of resources. Though this organization is geared toward improved scalability and functionality, for applications spanning multiple diverse resource islands to realize such opportunities, systems software must make it easier for them to interact with the island managers, and must also help island-based systems achieve end-to-end performance properties via joint coordination among their island managers. To meet the challenges of maintaining performance objectives on future 'scale-out' platforms, this thesis contributes inTune: a framework for inter-island operation, offering APIs and mechanisms that permit applications (and their virtual platforms) to interface with resource islands and their resource managers to jointly achieve application performance guarantees and global platform-level properties.
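The abstract does not reproduce inTune's APIs; purely as a hypothetical illustration of what an inter-island coordination interface might look like, the sketch below lets an application register a performance target with each island's resource manager and request a joint adjustment. Every name and signature here is invented for the example and should not be read as inTune's actual interface.

```c
/* Hypothetical sketch of an inter-island coordination interface; every name
 * and signature below is invented for illustration, not inTune's real API. */
#include <stdio.h>

typedef int island_id_t;

struct perf_target {
    double throughput_ops;   /* desired operations per second */
    double latency_ms;       /* acceptable tail latency */
};

/* Stub: ask one island's resource manager to honor a target for this app. */
static int island_set_target(island_id_t island, const struct perf_target *t) {
    printf("island %d: target %.0f ops/s, %.1f ms\n",
           island, t->throughput_ops, t->latency_ms);
    return 0;
}

/* Stub: let the managers of the listed islands make a joint adjustment. */
static int islands_coordinate(const island_id_t *islands, int n) {
    (void)islands;
    printf("coordinating %d island managers\n", n);
    return 0;
}

int main(void) {
    island_id_t spans[] = { 0, 1 };   /* the app spans two islands */
    struct perf_target t = { .throughput_ops = 50000, .latency_ms = 5.0 };
    for (int i = 0; i < 2; i++)
        island_set_target(spans[i], &t);
    islands_coordinate(spans, 2);
    return 0;
}
```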
This thesis focuses on the management of compute, physical memory, and memory bandwidth resources of single-node server platforms; however, the methods presented in this work can be extended to other resource types, including network and storage resources.
inTune and Virtual Platforms are implemented in the Xen hypervisor for x86 multicore platforms with multiple NUMA memory nodes. Evaluations with representative parallel, web-based, and real-time applications and application mixes demonstrate the benefits of using our methods to achieve application performance and platform policy objectives.
|
5 |
Protection des systèmes informatiques vis-à-vis des malveillances : un hyperviseur de sécurité assisté par le matériel / Protecting computer systems against malicious attacks: a hardware-assisted security hypervisor
Morgan, Benoît, 06 December 2016
Computer systems are evolving quickly. The classical model, in which each user is associated with a physical machine that they own and whose resources they exploit, is becoming obsolete. Today, the computing resources we use can be distributed anywhere on the Internet, and everyday workstations are no longer systematically physical machines. This observation highlights two important phenomena driving the evolution of how we use computers: cloud computing and hardware virtualization. Cloud computing lets a user exploit computing resources of potentially very different granularities, for a variable amount of time, drawn from a pool (a "cloud") of resources; the resource usage is then billed to the user. This model can clearly be advantageous for a company that wants to rely on a potentially unlimited amount of computing resources without having to administer and manage them itself, thereby gaining both productivity and money. From the point of view of the owner of the physical machines, the financial gain from leasing computing power is amplified by maximizing the utilization of those machines across different clients. Cloud computing must therefore be able to adapt to fluctuating demand and to reconfigure itself quickly. One way to reach these goals relies on the use of virtual machines and the associated virtualization techniques. Even though the virtualization of computing resources did not originate with the cloud, the advent of the cloud has considerably increased its use: every cloud provider today relies on virtual machines, which are much easier to deploy and migrate than physical machines. Virtualization of computing resources was previously based essentially on software techniques, but the growing use of virtual machines, particularly in the cloud, has led microprocessor manufacturers to include hardware assistance mechanisms for virtualization. These hardware extensions both simplify the virtualization process and improve its performance; technologies such as Intel VT-x and VT-d, AMD-V from AMD, and ARM's virtualization extensions have been created for this purpose. In addition, virtualization requires extra functionality to manage the different virtual machines, schedule them, and isolate and share hardware resources such as memory and peripherals. These functions are generally handled by a virtual machine manager, whose work can be eased to a greater or lesser degree by the characteristics of the processor on which it executes. In general, these technologies introduce new execution modes on processors that are ever more privileged and complex. Thus, while virtualization is clearly of real interest for modern computing, it is equally clear that its implementation adds complexity to computer systems, both in software (the virtual machine manager) and in hardware (new virtualization-assistance mechanisms integrated into processors).
From this observation, it is legitimate to ask about computer security in a context where processor architectures are becoming more and more complex, with more and more privileged execution modes. Given the complexity of computer systems, could the exploitation of vulnerabilities present in the privileged layers be very serious for the overall system? Given the presence of several virtual machines that do not trust one another within the same physical machine, could the exploitation of a vulnerability be carried out by a compromised virtual machine? Is it not necessary to consider new security architectures that take these risks into account? This thesis proposes to answer these questions. In particular, we present an overview of the various security problems in virtualized environments and current hardware architectures. Starting from this overview, we propose an original architecture for ensuring the integrity of software executing on a computer system, whatever its privilege level. This architecture is based on a mixed use of software (a security hypervisor of our own design, executing on the processor) and hardware (an autonomous trusted peripheral that we have also designed and implemented).
|
6 |
Využití virtualizace v podnikovém prostředí / Using Virtualization in the Enterprise Environment
Bartík, Branislav, January 2016
This master's thesis deals with designing a solution for a fictitious company, XYZ s.r.o., to save costs by building a training environment for its employees in order to develop their skills and experience in the field. The solution can also be used by university employees and students to test enterprise software for educational purposes. The author highlights the benefits of using cloud computing and open-source software, as well as the use of Docker container technology in combination with commercial software such as IBM WebSphere Application Server.
|
7 |
LEIA: The Live Evidence Information Aggregator: A Scalable Distributed Hypervisor-based Peer-2-Peer Aggregator of Information for Cyber-Law Enforcement I
Homem, Irvin, January 2013
The Internet in its most basic form is a complex information sharing organism. There are billions of interconnected elements with varying capabilities that work together supporting numerous activities (services) through this information sharing. In recent times, these elements have become portable, mobile, highly computationally capable, and more than ever intertwined with human controllers and their activities. They are also rapidly being embedded into other everyday objects and sharing more and more information in order to facilitate automation, signaling that the rise of the Internet of Things is imminent. In every human society there are always miscreants who prefer to drive against the common good and engage in illicit activity. It is no different within the society interconnected by the Internet (the Internet Society). Law enforcement in every society attempts to curb perpetrators of such activities. However, it is immensely difficult when the Internet is the playing field. The amount of information that investigators must sift through is incredibly massive and prosecution timelines stated by law are prohibitively narrow. The main solution to this Big Data problem is seen to be the automation of the digital investigation process. This encompasses the entire process: from the detection of malevolent activity, through the seizure/collection of evidence and the analysis of the evidentiary data collected, to the presentation of valid postulates. This paper focuses mainly on the automation of the evidence capture process in an Internet of Things environment. However, in order to achieve this comprehensively, the surrounding procedures -- detecting malevolent activity and analyzing the collected evidentiary data -- are also touched upon. To this effect we propose the Live Evidence Information Aggregator (LEIA) architecture, which aims to be a comprehensive automated digital investigation tool. LEIA is in essence a collaborative framework that hinges upon interactivity and sharing of resources and information among participating devices in order to achieve the necessary efficiency in data collection in the event of a security incident. It makes use of a variety of technologies to achieve its goals: crowdsourcing among devices to achieve more accurate malicious-event detection; hypervisors with built-in intrusion detection capabilities to facilitate efficient data capture; peer-to-peer networks to facilitate rapid transfer of evidentiary data to a centralized data store; cloud storage to hold massive amounts of data; and the Resource Description Framework from Semantic Web technologies to facilitate the interoperability of data storage formats among the heterogeneous devices. Within the description of the LEIA architecture, a peer-to-peer protocol based on the BitTorrent protocol is proposed, corresponding data storage and transfer formats are developed, and network security protocols are also taken into consideration. In order to demonstrate the LEIA architecture developed in this study, a small-scale prototype with limited capabilities has been built and tested. The prototype functionality focuses only on the secure, remote acquisition of the hard disk of an embedded Linux device over the Internet and its subsequent storage on a cloud infrastructure.
The successful implementation of this prototype demonstrates that the architecture is feasible and that automating the evidence seizure process makes an otherwise arduous process quick and easy to perform.
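The prototype's acquisition step is described only at a high level above. The sketch below shows one plausible shape of the client side under simplifying assumptions: reading a block device in fixed-size chunks and streaming it over a single TCP connection. The device path and collector address are placeholders, and the sketch omits the authentication, encryption, and cloud-storage handling that a real implementation (and the LEIA prototype) would require.

```c
/* Simplified sketch of an acquisition client: stream a block device over a
 * single TCP connection in fixed-size chunks. Device path and collector
 * address are placeholders; a real tool would secure the channel.
 * Build: gcc acquire.c -o acquire */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

#define CHUNK (1 << 20)   /* 1 MiB per read/send */

int main(void) {
    const char *device = "/dev/sda";          /* placeholder evidence source */
    const char *collector_ip = "192.0.2.10";  /* placeholder collector */
    int fd = open(device, O_RDONLY);
    if (fd < 0) { perror("open device"); return 1; }

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(9000) };
    inet_pton(AF_INET, collector_ip, &addr.sin_addr);
    if (connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect"); return 1;
    }

    char *buf = malloc(CHUNK);
    ssize_t n;
    while ((n = read(fd, buf, CHUNK)) > 0) {
        ssize_t off = 0;
        while (off < n) {                      /* handle partial sends */
            ssize_t sent = send(sock, buf + off, n - off, 0);
            if (sent < 0) { perror("send"); return 1; }
            off += sent;
        }
    }
    if (n < 0) perror("read device");

    free(buf);
    close(sock);
    close(fd);
    return 0;
}
```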
|
8 |
Towards Automation in Digital Investigations: Seeking Efficiency in Digital Forensics in Mobile and Cloud Environments
Homem, Irvin, January 2016
Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these challenges include the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities, and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition, and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry-standard tools are largely evidence-oriented, have limited support for evidence integration, and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reducing the time and human labour expended, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) in digital investigations by removing the need for proximity to the evidence. Experiments show that a single TCP connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP connection paradigm is required. The automated integration, correlation and reasoning on multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation.
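The single-connection versus multi-connection finding can be illustrated with the pattern below: the evidence image is split into byte ranges, and each range is pushed over its own TCP connection by a separate thread. The file name, collector address, and connection count are assumptions, and a real protocol would also send range metadata so the collector can reassemble the image; this is a sketch of the paradigm, not the system's actual transfer code.

```c
/* Sketch of a multi-TCP-connection acquisition pattern: split the evidence
 * image into ranges and push each range over its own connection.
 * Paths, collector address, and connection count are assumptions.
 * Build: gcc -pthread multiconn.c -o multiconn */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <unistd.h>

#define NCONN 4            /* number of parallel connections (assumption) */
#define CHUNK (1 << 20)    /* 1 MiB per read/send */

struct range { const char *path; off_t start; off_t len; };

static int connect_collector(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(9000) };
    inet_pton(AF_INET, "192.0.2.10", &a.sin_addr);   /* placeholder address */
    if (s < 0 || connect(s, (struct sockaddr *)&a, sizeof a) < 0) return -1;
    return s;
}

/* Each worker streams one byte range of the image over its own connection.
 * A real protocol would first send (start, len) so the collector can
 * reassemble the ranges into a complete image. */
static void *push_range(void *arg) {
    struct range *r = arg;
    int fd = open(r->path, O_RDONLY);
    int s = connect_collector();
    if (fd < 0 || s < 0) {
        if (fd >= 0) close(fd);
        if (s >= 0) close(s);
        return NULL;
    }
    char *buf = malloc(CHUNK);
    off_t done = 0;
    while (done < r->len) {
        ssize_t want = r->len - done < CHUNK ? r->len - done : CHUNK;
        ssize_t n = pread(fd, buf, want, r->start + done);
        if (n <= 0) break;
        for (ssize_t off = 0; off < n; ) {            /* handle short sends */
            ssize_t sent = send(s, buf + off, n - off, 0);
            if (sent < 0) { n = -1; break; }
            off += sent;
        }
        if (n < 0) break;
        done += n;
    }
    free(buf);
    close(s);
    close(fd);
    return NULL;
}

int main(void) {
    const char *image = "evidence.img";               /* placeholder source */
    struct stat st;
    if (stat(image, &st) < 0) { perror("stat"); return 1; }

    pthread_t tid[NCONN];
    struct range ranges[NCONN];
    off_t slice = st.st_size / NCONN;
    for (int i = 0; i < NCONN; i++) {
        ranges[i].path = image;
        ranges[i].start = i * slice;
        ranges[i].len = (i == NCONN - 1) ? st.st_size - i * slice : slice;
        pthread_create(&tid[i], NULL, push_range, &ranges[i]);
    }
    for (int i = 0; i < NCONN; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```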
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, thereby enabling automation and reducing the need for manual human intervention.
|