1 |
Verified Java bytecode verification. Klein, Gerwin. January 2003 (has links) (PDF)
München, Technische Universität, Diss., 2003.
|
2 |
Ramasse-miettes générationnel et incrémental gérant les cycles et les gros objets en utilisant des frames délimités / Generational and incremental garbage collector handling cycles and large objects using delimited frames. Adam, Sébastien January 2008 (has links) (PDF)
In recent years, research has been conducted on several techniques related to garbage collection. Several key advances in copying garbage collection have been made. However, improvements are still possible. In this thesis, we introduce new techniques and new algorithms to improve garbage collection. In particular, we introduce a technique that uses delimited frames to mark and trace root pointers. This technique allows efficient computation of the root set. It reuses concepts from two existing techniques, card marking and remembered sets, and uses a bidirectional object layout to improve on these concepts by stabilizing the memory overhead and reducing the workload of pointer traversal. We also present an algorithm for recursively marking reachable objects without using a stack (eliminating the usual memory waste). We adapt this algorithm to implement a depth-first copying garbage collector and to improve heap locality. We improve the older-first garbage collection algorithm and its generational version by adding a marking phase that guarantees the collection of all garbage, including cyclic structures spread across several windows. Finally, we introduce a technique for managing large objects. To test our ideas, we designed and implemented, in the free Java virtual machine SableVM, a portable and extensible garbage collection framework. Within this framework, we implemented semi-space, older-first and generational collection algorithms. Our experiments show that the delimited-frame technique delivers competitive performance on several benchmarks. They also show that, for most benchmarks, our depth-first traversal algorithm improves locality and thus increases performance. Our overall performance measurements show that, using our techniques, a garbage collector can deliver competitive performance and outperform existing collectors on several benchmarks. ______________________________________________________________________________ AUTHOR'S KEYWORDS: Garbage Collector, Virtual Machine, Java, SableVM.
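For context on the stackless-marking idea mentioned above, here is a minimal sketch of one classic way to mark reachable objects without an explicit stack, Deutsch-Schorr-Waite pointer reversal over two-field nodes; the Node and marker classes are illustrative assumptions, and this is not claimed to be the exact algorithm the thesis develops.

```java
// Illustrative sketch: stackless marking via pointer reversal on two-field nodes.
final class Node {
    Node left, right;
    boolean marked;
    int state;          // 0 = children not yet visited, 1 = left done, 2 = right done
}

final class PointerReversalMarker {
    static void mark(Node root) {
        if (root == null || root.marked) return;
        Node current = root, previous = null;
        current.marked = true;
        while (current != null) {
            if (current.state == 0) {                 // try to descend into left child
                current.state = 1;
                Node next = current.left;
                if (next != null && !next.marked) {
                    current.left = previous;          // reverse the pointer we follow
                    previous = current;
                    current = next;
                    current.marked = true;
                }
            } else if (current.state == 1) {          // try to descend into right child
                current.state = 2;
                Node next = current.right;
                if (next != null && !next.marked) {
                    current.right = previous;
                    previous = current;
                    current = next;
                    current.marked = true;
                }
            } else {                                  // both children done: retreat
                Node parent = previous;
                if (parent == null) break;
                if (parent.state == 1) {              // we descended via parent.left
                    previous = parent.left;
                    parent.left = current;            // restore the reversed pointer
                } else {                              // we descended via parent.right
                    previous = parent.right;
                    parent.right = current;
                }
                current = parent;
            }
        }
    }
}
```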
|
3 |
Efficient shared object space support for distributed Java virtual machine. Lam, King-tin., 林擎天. January 2012 (has links)
Given the popularity of Java, extending the standard Java virtual machine (JVM) to become cluster-aware effectively brings the vision of transparent horizontal scaling of applications to fruition. With a set of cluster-wide JVMs orchestrated as a virtually single system, thread-level parallelism in Java is no longer confined to one multiprocessor. An unmodified multithreaded Java application running on such a Distributed JVM (DJVM) can scale out transparently, tapping into the vast computing power of the cluster.
While this notion creates an easy-to-use and powerful parallel programming paradigm, research on DJVMs has remained largely at the proof-of-concept stage where successes were proven using trivial scientific computing workloads only. Real-life Java applications with commercial server workloads have not been well-studied on DJVMs. Their characteristics, including complex and sometimes huge object graphs, irregular access patterns and frequent synchronization, are key scalability hurdles. To design a scalable DJVM for real-life applications, we identify three major unsolved issues calling for a top-to-bottom overhaul of traditional systems.
First, we need a more time- and space-efficient cache coherence protocol to support fine-grained object sharing over the distributed shared heap. The recent prevalence of concurrent data structures with heavy use of volatile fields has added complications to the matter. Second, previous generations of DJVMs lack true support for memory-intensive applications. While the network-wide aggregated physical memory can be huge, mutual sharing of huge object graphs like Java collections may cause nodes to eventually run out of local heap space because the cached copies of remote objects, linked by active references, can’t be arbitrarily discarded. Third, thread affinity, which determines the overall communication cost, is vital to the DJVM performance. Data access locality can be improved by collocating highly-correlated threads, via dynamic thread migration. Tracking inter-thread correlations trades profiling costs for reduced object misses. Unfortunately, profiling techniques like active correlation tracking used in page-based DSMs would entail prohibitively high overheads and low accuracy when ported to fine-grained object-based DJVMs.
This dissertation presents technical contributions towards all these problems. We use a dual-protocol approach to address the first problem. Synchronized (lock-based) and volatile accesses are handled by a home-based lazy release consistency (HLRC) protocol and a sequential consistency (SC) protocol respectively. The two protocols’ metadata are maintained in a conflict-free, memory-efficient manner. With further techniques like hierarchical passing of lock ownerships, the overall communication overheads of fine-grained distributed object sharing are pruned to a minimal level. For the second problem, we develop a novel uncaching mechanism to safely break a huge active object graph. When a JVM instance runs low on free memory, it initiates an uncaching policy, which eagerly assigns nulls to selected reference fields, thus detaching some older or less useful cached objects from the root set for reclamation. Careful orchestration is made between uncaching, local garbage collection and the coherence protocol to avoid possible data races. Lastly, we devise lightweight sampling-based profiling methods to derive inter-thread correlations, and a profile-guided thread migration policy to boost the system performance. Extensive experiments have demonstrated the effectiveness of all our solutions. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
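To make the uncaching idea concrete, a minimal sketch follows; the class and method names are hypothetical and the eviction policy (drop the oldest half of the cached copies) is an assumption, not the dissertation's actual in-heap mechanism, which operates on reference fields and coordinates with local garbage collection and the coherence protocol.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: detach older cached copies of remote objects when local
// memory runs low, so the local GC can reclaim them; the home node keeps the
// authoritative copy and the data can be refetched on demand.
final class RemoteObjectCache<K, V> {
    // Insertion-ordered map: the oldest cached copies are visited first.
    private final Map<K, V> cached = new LinkedHashMap<>();

    void cache(K id, V copy) { cached.put(id, copy); }

    V get(K id) { return cached.get(id); }   // may be null after uncaching: refetch from home

    /** When free memory drops below a threshold, drop the oldest cached copies. */
    void uncacheIfLow(long lowWatermarkBytes) {
        Runtime rt = Runtime.getRuntime();
        long free = rt.freeMemory() + (rt.maxMemory() - rt.totalMemory());
        if (free >= lowWatermarkBytes) return;
        Iterator<Map.Entry<K, V>> it = cached.entrySet().iterator();
        int toDrop = cached.size() / 2;                 // assumed policy: drop half
        while (toDrop-- > 0 && it.hasNext()) {
            it.next();
            it.remove();                                // detach from the root set
        }
    }
}
```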
|
4 |
Algorithms and Systems for Virtual Machine Scheduling in Cloud Infrastructures. Li, Wubin January 2014 (has links)
With the emergence of cloud computing, computing resources (i.e., networks, servers, storage, applications, etc.) are provisioned as metered on-demand services over networks, and can be rapidly allocated and released with minimal management effort. In the cloud computing paradigm, the virtual machine (VM) is one of the most commonly used resource units in which business services are encapsulated. VM scheduling optimization, i.e., finding optimal placement schemes for VMs and reconfigurations according to changing conditions, becomes a challenging issue for cloud infrastructure providers and their customers. The thesis investigates the VM scheduling problem in two scenarios: (i) single-cloud environments where VMs are scheduled within a cloud aiming at improving criteria such as load balancing, carbon footprint, utilization, and revenue, and (ii) multi-cloud scenarios where a cloud user (which could be the owner of the VMs or a cloud infrastructure provider) schedules VMs across multiple cloud providers, targeting optimization for investment cost, service availability, etc. For single-cloud scenarios, taking load balancing as the objective, an approach to optimal VM placement for predictable and time-constrained peak loads is presented. In addition, we also present a set of heuristic methods based on fundamental management actions (namely, suspend and resume physical machines, VM migration, and suspend and resume VMs), continuously optimizing the profit for the cloud infrastructure provider regardless of the predictability of the workload. For multi-cloud scenarios, we identify key requirements for service deployment in a range of common cloud scenarios (including private clouds, bursted clouds, federated clouds, multi-clouds, and cloud brokering), and present a general architecture to meet these requirements. Based on this architecture, a set of placement algorithms tuned for cost optimization under dynamic pricing schemes are evaluated. By explicitly specifying service structure, component relationships, and placement constraints, a mechanism is introduced to give service owners the ability to influence placement. In addition, we also study how dynamic cloud scheduling using VM migration can be modeled using a linear integer programming approach. The primary contribution of this thesis is the development and evaluation of algorithms (ranging from combinatorial optimization formulations to simple heuristic algorithms) for VM scheduling in cloud infrastructures. In addition to scientific publications, this work also contributes software tools (in the OPTIMIS project funded by the European Commission's Seventh Framework Programme) that demonstrate the feasibility and characteristics of the approaches presented. / In cloud computing, computing resources (i.e., networks, servers, storage, applications, etc.) are provided as services accessible over the Internet. The resources, for example virtual machines (VMs), can be quickly and easily allocated and released as needed. The potentially rapid changes in how many and how large VMs are needed lead to challenging scheduling and configuration problems. Scheduling problems arise both for infrastructure providers, who must choose which servers within a cloud different VMs should be placed on, and for their customers, who must choose which clouds VMs should be placed on.
The thesis focuses on VM scheduling problems in these two scenarios, i.e., (i) individual clouds where VMs are scheduled to optimize load balance, energy consumption, resource utilization and economy, and (ii) situations where a cloud user must choose one or more clouds on which to place VMs in order to optimize, e.g., cost, performance and availability for the application using the resources. For the first scenario, the thesis presents a scheduling method that optimizes load balance across the physical computing resources based on predictable load variations. In addition, a set of heuristic methods, based on fundamental resource management actions, is presented for continuously optimizing the economic profit for a cloud provider without requiring the load variations to be predictable. For the multi-cloud case, we identify key requirements for how resource management services should be constructed to work well in a range of conceptually different multi-cloud scenarios. Based on these requirements, we also define a general architecture that can be adapted to these scenarios. Building on our architecture, we develop and evaluate a set of VM scheduling algorithms intended to minimize the cost of using cloud infrastructure with dynamic pricing. New functionality gives users the ability to explicitly specify relationships between the allocated VMs and other constraints on how they should be placed. We also demonstrate how linear integer programming can be used to optimize this scheduling problem. The main contribution of the thesis is the development and evaluation of new methods for VM scheduling in clouds, with solutions that include both combinatorial optimization and heuristic methods. Beyond scientific publications, the work also contributes software tools for VM scheduling, developed within the OPTIMIS project funded by the European Commission's Seventh Framework Programme.
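As a rough illustration of how VM placement under a cost objective can be cast as an integer program, a minimal sketch under assumed symbols (not the thesis's exact formulation):

```latex
% Minimal VM-placement ILP sketch (illustrative; symbols introduced here).
% x_{ij} = 1 if VM i is placed on provider/host j, c_{ij} = placement cost,
% r_i = resource demand of VM i, C_j = capacity of provider/host j.
\begin{align*}
\min_{x} \quad & \sum_{i=1}^{n}\sum_{j=1}^{m} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{j=1}^{m} x_{ij} = 1, && i = 1,\dots,n \quad \text{(each VM placed exactly once)} \\
& \sum_{i=1}^{n} r_{i}\, x_{ij} \le C_{j}, && j = 1,\dots,m \quad \text{(capacity of each provider/host)} \\
& x_{ij} \in \{0,1\}.
\end{align*}
```

A solver returns the placement matrix x; migration-aware variants add terms penalizing moves away from the current placement.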
|
5 |
JAVA VIRTUAL MACHINE DESIGN FOR EMBEDDED SYSTEMS: ENERGY, TIME PREDICTABILITY AND PERFORMANCE. Sun, Yu 01 December 2010 (has links)
Embedded systems can be found everywhere in our daily lives. Due to the great variety of embedded devices, the platform-independent Java language provides a good solution for embedded system development. The Java virtual machine (JVM) is the most critical component of all kinds of Java platforms. Hence, it is extremely important to study the special design of JVMs for embedded systems. The key challenges of designing a successful JVM for embedded systems are energy efficiency, time predictability and performance, each of which is investigated in this dissertation. We first study the energy issue of the JVM on embedded systems. With a cycle-accurate simulator, we study each stage of Java execution separately to test the effects of different configurations in both software and hardware. After that, an alternative Adaptive Optimization System (AOS) model is introduced, which estimates the cost/benefit using energy data instead of running time. We tuned the parameters of this model to study how to improve dynamic compilation and optimization in Jikes RVM in terms of energy consumption. In order to further reduce the energy dissipation of the JVM on embedded systems, we study adaptive drowsy cache control for Java applications, where the JVM can be used to make better decisions about drowsy cache control. We explore the impact of different phases of Java applications on the timing behavior of cache usage. Then we propose several techniques to adaptively control the drowsy cache to reduce energy consumption with minimal impact on performance. We also observe that the traditional Java code generation and instruction fetch path are not efficient, so we study three hardware-based code caching strategies that attempt to write and read the dynamically generated Java code faster and more energy-efficiently. Time predictability is another key challenge for JVMs on embedded systems, so we exploit multicore computing to reduce the timing unpredictability caused by dynamic compilation and adaptive optimization. Our goal is to retain high performance comparable to that of traditional dynamic compilation and, at the same time, obtain better time predictability for the JVM. We study pre-compilation techniques to utilize another core more efficiently. Furthermore, we develop a Pre-optimization on Another Core (PoAC) scheme to replace the AOS in Jikes RVM, since the AOS is very sensitive to execution time variation and greatly impacts time predictability. Finally, we propose two new approaches that automatically parallelize Java programs at run-time, in order to meet the performance challenge of the JVM on embedded systems. These approaches rely on run-time trace information collected during program execution, and dynamically recompile Java bytecode so that it can be executed in parallel. One approach utilizes trace information to improve traditional loop parallelization, and the other parallelizes traces instead of loop iterations.
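As an illustration of the energy-based cost/benefit idea, a minimal sketch with hypothetical names and an assumed linear energy model (not Jikes RVM's actual AOS code): recompile a method at a higher optimization level only if the energy expected to be saved over its predicted remaining invocations outweighs the one-time recompilation energy.

```java
// Illustrative sketch of an energy-driven adaptive-optimization decision.
public final class EnergyAosModel {
    /** Estimated energy (joules) to recompile a method at the given level (assumed linear model). */
    double recompileEnergy(double methodSizeBytes, int optLevel) {
        return methodSizeBytes * 1e-6 * (optLevel + 1);
    }

    /** Energy saved per invocation if the optimized version runs speedupFactor times faster. */
    double energySavedPerCall(double baselineJoulesPerCall, double speedupFactor) {
        return baselineJoulesPerCall * (1.0 - 1.0 / speedupFactor);
    }

    /** Recompile only if predicted future savings outweigh the one-time cost. */
    boolean shouldRecompile(double methodSizeBytes, int optLevel,
                            double baselineJoulesPerCall, double speedupFactor,
                            long predictedFutureCalls) {
        double cost = recompileEnergy(methodSizeBytes, optLevel);
        double benefit = energySavedPerCall(baselineJoulesPerCall, speedupFactor)
                         * predictedFutureCalls;
        return benefit > cost;
    }
}
```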
|
6 |
Virtual Machine Management for Dynamic Vehicular Clouds. Refaat, Tarek January 2017
Vehicular clouds involve a dynamic environment where virtual machines are hosted on moving vehicles, leading to frequent changes in the data center network topology. These frequent topological changes require frequent virtual machine migrations in order to meet the service level agreements with cloud users. Such topology changes include fluctuations in connectivity, signal strength and quality. Few studies address vehicles as potential virtual machine hosts, while there is a significant opportunity in capitalizing on underutilized resources. Due to the rapidly changing environment of a vehicular cloud, hosts frequently change or leave coverage. As such, virtual machine management and migration schemes are necessary to ensure cloud subscribers have a satisfactory level of access to the resources. This thesis addresses the need for virtual machine management for the vehicular cloud. First, a mobility model is proposed and utilized to test a set of novel Vehicular Virtual Machine Migration (VVMM) schemes: VVMM-U (Uniform), VVMM-LW (Least Workload), VVMM-MA (Mobility Aware) and MDWLAM (Mobility and Destination Workload Aware Migration). Their performance is evaluated with respect to a set of metrics through simulations with varying levels of vehicular traffic congestion, virtual machine sizes and load restriction levels. The most advanced scheme (MDWLAM) takes into account the workload and mobility of the original host as well as those of the potential destinations. By doing so a valid destination will both have time to receive the workload and migrate the new load when necessary. The behavior of various algorithms is compared and the MDWLAM has been shown to demonstrate the best performance, exhibiting migration drop rates that are negligibly small. Finally, an integer linear program formulation based on a modified single source shortest path problem is presented, tested and successfully shown to be a benchmark that can be used in comparison to the proposed heuristics.
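To illustrate the flavor of mobility- and destination-workload-aware destination selection, a minimal sketch with hypothetical classes and scoring (not the VVMM/MDWLAM algorithms themselves): a destination is valid only if it will remain in coverage long enough to receive the VM and later migrate it again; among valid hosts, the one with the lowest workload is chosen.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch of a mobility- and workload-aware migration heuristic.
public final class MigrationPlanner {
    public record Host(String id, double remainingCoverageSec, double workload) {}

    public Optional<Host> pickDestination(List<Host> candidates,
                                          double transferTimeSec,
                                          double reMigrationSlackSec) {
        return candidates.stream()
                // keep only hosts that stay in coverage long enough to receive and re-migrate
                .filter(h -> h.remainingCoverageSec() >= transferTimeSec + reMigrationSlackSec)
                // among valid destinations, prefer the least-loaded host
                .min(Comparator.comparingDouble(Host::workload));
    }
}
```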
|
7 |
YAVM: Yet Another Virtual Machine. Kalappuraikal Sivadas, Nived January 2011 (has links)
No description available.
|
8 |
The design and implementation of memory management of virtual machine in user-space. Chu, Ching-hao 21 June 2011
With the growing popularity of smart handheld devices, the design and development of embedded systems has attracted much more attention. Problems such as device stability and efficiency, easy-to-operate interface design, and the design of a wide variety of applications have become increasingly important.
Application development on embedded systems is often limited by system resources such as memory. Compared with general-purpose computers, embedded systems have very limited memory. Program development for embedded systems therefore often has to consider the problem of insufficient memory, and programs must avoid performing so many memory allocations that they occupy a large share of system memory, affecting system operation and endangering system stability.
Java is one of the languages commonly used in embedded system development. Thanks to its high portability, Java programs can easily be ported to another system environment through the Java virtual machine. However, Java programming is also restricted: Java programs are not allowed to access memory directly, and memory allocation and release are controlled entirely by the system rather than by the user.
The purpose of this research is to design a set of Java programming tools for the Android Dalvik virtual machine that handle memory allocation and release, allowing users to control memory so that it can be reused and so that system hazards caused by memory leaks are avoided.
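As a rough illustration of the kind of user-controlled allocation and release interface described, a minimal sketch of a hypothetical object pool (the thesis's actual tool operates inside the Dalvik virtual machine rather than at the library level):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Illustrative sketch: explicit acquire/release so objects are reused instead of
// repeatedly allocated, bounding heap growth on memory-constrained devices.
public final class ManagedPool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public ManagedPool(Supplier<T> factory) { this.factory = factory; }

    /** Reuse a previously released object when possible instead of allocating a new one. */
    public T acquire() {
        T obj = free.pollFirst();
        return (obj != null) ? obj : factory.get();
    }

    /** Explicitly hand an object back so it can be reused. */
    public void release(T obj) {
        free.addFirst(obj);
    }
}
```

A typical use would be `ManagedPool<byte[]> buffers = new ManagedPool<>(() -> new byte[4096]);`, acquiring buffers for I/O and releasing them immediately after use.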
|
9 |
Virtualios aplinkos saugos sistemos prototipas / Virtual environment security system prototype. Žirgulis, Mantas 05 November 2013 (has links)
The master's thesis "Virtualios aplinkos saugos sistemos prototipas" ("Virtual environment security system prototype") describes and designs a virtual environment security system (hereafter VASS) which, for virtual machines in a passive (powered-off) state, ensures the main goals of information security: confidentiality, integrity and availability. A virtual machine in a passive state can be mounted as a separate partition using the host operating system or various third-party tools, and its file system can then be browsed as if it were an ordinary hard disk. This functionality creates potential security threats to data confidentiality and integrity, since nothing guarantees that the file system of a mounted virtual machine will not be modified. / A virtual machine is a software application in which an operating system and programs can be installed in the same way as on physical computer hardware. When turned off, a virtual machine is only a file. This file can be mounted as a separate partition using the virtualization platform or third-party programs and browsed like an ordinary computer file system. This functionality opens a weak spot, because there is no way to ensure that no system files are modified while the virtual machine is off.
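As a simple illustration of protecting a powered-off image's integrity, a minimal sketch assuming a digest recorded at shutdown and checked before the next boot (a generic approach, not the VASS design itself; the class and method names are introduced here):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Illustrative sketch: detect offline tampering with a powered-off VM disk image
// by comparing a stored cryptographic digest of the image file against a fresh one.
public final class ImageIntegrity {
    public static String sha256Of(Path imageFile) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(imageFile)) {
            byte[] buf = new byte[1 << 16];
            for (int n; (n = in.read(buf)) != -1; ) {
                md.update(buf, 0, n);     // stream the image so large files fit in memory
            }
        }
        return HexFormat.of().formatHex(md.digest());
    }

    /** True if the stored digest still matches the image on disk. */
    public static boolean unmodified(Path imageFile, String expectedHex) throws Exception {
        return sha256Of(imageFile).equalsIgnoreCase(expectedHex);
    }
}
```

This only detects modification; confidentiality would additionally require encrypting the image at rest.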
|
10 |
Analysing Performance Effects of Deduplication on Virtual Machine Storage. Kauküla, Marcus January 2017 (has links)
Virtualization is a widely used technology for running multiple operating systems on a single set of hardware. Virtual machines running the same operating system have been shown to contain a large amount of identical data; in such cases deduplication has been shown to be very effective at eliminating duplicated data. This study aimed to investigate whether the storage savings are as large as shown in previous research, and whether there are any negative performance impacts when using deduplication. The selected performance variables are resource utilisation and disk performance. The selected deduplication implementations are SDFS and ZFS deduplication, each tested against its respective non-deduplicated file system, ext4 and ZFS. The results show that the storage savings are between 72.5% and 73.65%, while resource utilisation is generally higher when using deduplication. The results also show that deduplication using SDFS has a large overall negative impact on disk performance, while ZFS deduplication generally increases disk performance.
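To illustrate the core mechanism behind such deduplication, a minimal sketch of content-addressed block storage with an assumed in-memory index (real systems such as SDFS and ZFS keep persistent deduplication tables and handle many more concerns, including hash collisions and reference counting):

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.HexFormat;
import java.util.Map;

// Illustrative sketch: identical blocks share one stored copy, keyed by their fingerprint.
public final class BlockDeduplicator {
    private final Map<String, byte[]> store = new HashMap<>(); // fingerprint -> unique block
    private long logicalBlocks = 0;

    /** Write a logical block; the payload is stored only once per distinct content. */
    public String write(byte[] block) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        String fingerprint = HexFormat.of().formatHex(md.digest(block));
        store.putIfAbsent(fingerprint, block.clone());
        logicalBlocks++;
        return fingerprint;                             // callers keep the reference
    }

    /** Fraction of logical data eliminated by sharing identical blocks. */
    public double savings() {
        return logicalBlocks == 0 ? 0.0 : 1.0 - (double) store.size() / logicalBlocks;
    }
}
```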
|