101

A Framework for Efficient Management of Fault Tolerance in Cloud Data Centres and High-Performance Computing Systems: An Investigation and Performance analysis of a Cloud Based Virtual Machine Success and Failure Rate in a typical Cloud Computing Environment and Prediction Methods

Mohammed, Bashir January 2019 (has links)
Cloud computing is attracting increasing attention in both academic research and industry and has been widely used to solve advanced computation problems. As cloud datacentres continue to grow in scale and complexity, the risk of failure of Virtual Machines (VMs) and hosts running several jobs and processing large amounts of user requests increases, and it consequently becomes even more difficult to predict potential failures within a datacentre. Even though fault tolerance continues to be an issue of growing concern in cloud and HPC systems, mitigating the impact of failure and providing accurate predictions with enough lead time remains a difficult research problem. Traditional fault-tolerance strategies such as regular checkpoint/restart and replication are not adequate because of the emerging complexity of these systems, and they do not scale well in the cloud due to resource sharing and distributed system networks. In this thesis, a new reliable fault-tolerance scheme based on an intelligent optimal strategy is presented to ensure high system availability, reduced task completion time and an efficient VM allocation process. Specifically: (i) a generic fault-tolerance algorithm for cloud data centres and HPC systems in the cloud was developed; (ii) a verification process was developed to verify the full VM specification during allocation in the presence of faults, and in comparison to existing approaches the results obtained show an increase in the success rate of the VMs, a reduction in the response time of VM allocation, and improved overall performance; and (iii) a failure prediction model was developed, and the predictive capabilities of machine learning were explored by applying several algorithms to improve prediction accuracy. Experimental results indicate that the average prediction accuracy of the proposed model is about 90%, compared to existing algorithms, which implies that the approach can effectively predict potential system and application failures.
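For illustration only, the kind of machine-learning failure prediction described above could, in its simplest form, be a logistic-regression model trained on VM monitoring metrics. The sketch below is not taken from the thesis; the feature set (CPU load, memory pressure, I/O wait) and the synthetic training data are assumptions.

```java
/**
 * Minimal sketch (not the thesis implementation): a logistic-regression
 * failure predictor trained on synthetic VM metrics. Feature names,
 * training data, and hyperparameters are illustrative assumptions.
 */
public class VmFailurePredictor {
    private final double[] w;   // feature weights
    private double b;           // bias

    VmFailurePredictor(int features) { w = new double[features]; }

    private static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    /** Returns the estimated probability that the VM/host will fail. */
    double predict(double[] x) {
        double z = b;
        for (int i = 0; i < w.length; i++) z += w[i] * x[i];
        return sigmoid(z);
    }

    /** Batch gradient descent over labelled monitoring samples (label 1 = failed). */
    void fit(double[][] X, int[] y, double lr, int epochs) {
        for (int e = 0; e < epochs; e++) {
            double[] gw = new double[w.length];
            double gb = 0;
            for (int n = 0; n < X.length; n++) {
                double err = predict(X[n]) - y[n];
                for (int i = 0; i < w.length; i++) gw[i] += err * X[n][i];
                gb += err;
            }
            for (int i = 0; i < w.length; i++) w[i] -= lr * gw[i] / X.length;
            b -= lr * gb / X.length;
        }
    }

    public static void main(String[] args) {
        // Synthetic samples: {cpuLoad, memPressure, ioWait}.
        double[][] X = {{0.95, 0.90, 0.7}, {0.20, 0.30, 0.1},
                        {0.85, 0.95, 0.8}, {0.40, 0.35, 0.2}};
        int[] y = {1, 0, 1, 0};
        VmFailurePredictor p = new VmFailurePredictor(3);
        p.fit(X, y, 0.5, 2000);
        System.out.printf("P(failure | high load) = %.2f%n", p.predict(new double[]{0.90, 0.92, 0.75}));
        System.out.printf("P(failure | low load)  = %.2f%n", p.predict(new double[]{0.30, 0.25, 0.10}));
    }
}
```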
102

Safe Application Execution on Resource-Constrained IoT Devices Using WebAssembly

Engstrand, Fredrik January 2024 (has links)
The Internet of Things (IoT) comprises many small, embedded devices that operate under severe resource constraints concerning energy, bandwidth, and memory footprint. Software for such devices has traditionally been implemented in relatively low-level languages such as C, which makes it susceptible to bugs or flaws that can compromise the security of the device. This thesis adds interpreted WebAssembly (WASM) bytecode execution to Contiki-NG, an operating system for next-generation IoT devices, using an open-source WASM runtime called WebAssembly Micro Runtime (WAMR). This creates an isolated and secure environment in which applications execute with restricted access to the host operating system. To support the event-driven approach of Contiki-NG, bytecode execution can be interrupted and resumed as needed, allowing the operating system to handle pending events without significant delays. The result is a way for applications written in a variety of programming languages to be safely executed in Contiki-NG and to interact with its APIs. When tested on Nordic Semiconductor's nRF52840 System-on-Chip (SoC), applications executed as bytecode showed an increase in binary size of 2.7-3.1x and a performance penalty of around 9.2x for C-generated bytecode and 10.3x for Rust-generated bytecode. For less compute-intensive applications the performance penalty is less prominent, but there is still a sizable increase in energy consumption compared to native execution.
103

AOT kompilering för minskad starttid av Java-baserade tjänster / AOT compilation to reduce startup time for Java-based services

Pergler, Oscar January 2023 (has links)
Software engineering architectures, such as microservices and serverless, have been increasingly adopted for their ability to address architectural challenges through a modular approach. This modularity involves isolating components and assigning them specific responsibilities independently of other components. Java, a computationally robust language, is frequently used in microservice architectures; however, the Java Virtual Machine (JVM) is often criticized for its slow and unpredictable startup times in these environments. This study investigates the startup time, response time, and CPU load of Java services compiled with either the JVM or GraalVM. A microservice system comprising three testable Java services was developed and monitored to identify any differences in the aforementioned metrics. The results indicate that GraalVM outperforms the JVM in terms of startup time. However, the impact of GraalVM on response time is not statistically significant enough to reject the null hypothesis. Additionally, GraalVM demonstrates lower CPU usage during cold starts. From an environmental perspective, it is important to note that the shortened startup time potentially comes at the cost of increased development time, depending on the complexity of the system and the seniority of the developer.
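As a rough, hypothetical illustration of how such a startup comparison can be set up (this is not the measurement harness used in the study), the same tiny readiness probe can be compiled once to regular bytecode and once ahead-of-time with GraalVM's native-image tool, and the launch-to-ready time compared externally:

```java
// Hypothetical readiness probe: the time from launching the process to this
// message being printed is taken as the startup time.
public class StartupProbe {
    public static void main(String[] args) {
        System.out.println("ready at " + System.currentTimeMillis());
    }
}
```

Timing `java StartupProbe` against the executable produced by `native-image StartupProbe` (for example with the shell's `time` command, sampling CPU load externally with a tool such as `pidstat`) yields the kind of startup and CPU figures the study compares.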
104

Optimizing Virtual Machine I/O Performance in Cloud Environments

Lu, Tao 01 January 2016 (has links)
Maintaining closeness between data sources and data consumers is crucial for workload I/O performance. In cloud environments, this closeness can be violated by system administrative events and storage architecture barriers. VM migration events are frequent in cloud environments, and migration changes a VM's runtime inter-connections or cache contexts, significantly degrading VM I/O performance. Virtualization is the backbone of cloud platforms, but I/O virtualization adds extra hops to the workload data access path, prolonging I/O latencies. I/O virtualization overheads cap the throughput of high-speed storage devices and impose high CPU utilization and energy consumption on cloud infrastructures. To maintain the closeness between data sources and workloads during VM migration, we propose Clique, an affinity-aware migration scheduling policy, to minimize the aggregate wide-area communication traffic during storage migration in virtual-cluster contexts. In host-side caching contexts, we propose Successor, which recognizes warm pages and prefetches them into the caches of destination hosts before migration completes. To bypass the I/O virtualization barriers, we propose VIP, an adaptive I/O prefetching framework that uses a virtual I/O front-end buffer for prefetching, avoiding the on-demand involvement of the I/O virtualization stack and accelerating I/O responses. Analysis of the traffic trace of a virtual cluster containing 68 VMs demonstrates that Clique can reduce inter-cloud traffic by up to 40%. Tests with the MPI Reduce_scatter benchmark show that Clique can keep VM performance during migration at up to 75% of the non-migration scenario, more than three times that of a random VM-choosing policy. In host-side caching environments, Successor performs better than existing cache warm-up solutions and achieves zero VM-perceived cache warm-up time with low resource costs. At the system level, we conducted a comprehensive quantitative analysis of I/O virtualization overheads. Our trace-replay-based simulation demonstrates the effectiveness of VIP for data prefetching with negligible additional cache resource costs.
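The core idea behind a front-end prefetch buffer like the one VIP relies on can be sketched independently of any hypervisor code. The class below is an illustrative assumption, not VIP itself: on each block read it speculatively fetches the next few blocks so that later sequential reads are served from the buffer rather than from the I/O virtualization stack.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical sequential read-ahead buffer sitting in front of virtualized storage. */
public class PrefetchBuffer {
    interface BlockStore { byte[] readBlock(long blockNo); } // backing (virtualized) storage

    private final BlockStore store;
    private final int prefetchDepth;
    private final Map<Long, byte[]> buffer;

    PrefetchBuffer(BlockStore store, int depth, int capacity) {
        this.store = store;
        this.prefetchDepth = depth;
        // Access-ordered LinkedHashMap with eldest-entry eviction keeps the buffer bounded (LRU).
        this.buffer = new LinkedHashMap<Long, byte[]>(capacity, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> e) {
                return size() > capacity;
            }
        };
    }

    byte[] read(long blockNo) {
        byte[] data = buffer.get(blockNo);
        if (data == null) {
            data = store.readBlock(blockNo);
            buffer.put(blockNo, data);
        }
        // Speculative read-ahead of the next few blocks.
        for (long b = blockNo + 1; b <= blockNo + prefetchDepth; b++) {
            if (!buffer.containsKey(b)) {
                buffer.put(b, store.readBlock(b));
            }
        }
        return data;
    }

    public static void main(String[] args) {
        // Toy backing store: each "block" is just its number encoded as bytes.
        BlockStore slowStore = blockNo -> {
            System.out.println("backend read of block " + blockNo);
            return String.valueOf(blockNo).getBytes();
        };
        PrefetchBuffer buf = new PrefetchBuffer(slowStore, 2, 16);
        buf.read(10);   // triggers backend reads of blocks 10, 11, 12
        buf.read(11);   // served from the buffer, no backend read
    }
}
```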
105

[en] A SPECIFICATION FOR A JAVA REGISTER-BASED MACHINE / [pt] UMA ESPECIFICAÇÃO DE MÁQUINA DE REGISTRADORES PARA JAVA

GUILHERME CAMPOS HAZAN 21 May 2007 (has links)
[en] The Java language was created with a focus on portability. The code generated by the compiler is interpreted by a virtual machine, and not directly by the target processor, like programs written in C. This intermediate code, also known as bytecode, is the key to Java's portability. Java bytecodes use a stack to manipulate instruction operands. The use of a stack has its pros and cons. Among the advantages, we can cite the simplicity of implementing the compiler and the virtual machine. On the other hand, there is a reduction in execution speed, due to the need to move operands to and from the stack and retrieve results from it, increasing the number of instructions that must be processed. Several studies indicate that register-based virtual machines can be faster than stack-based ones. Based on this, we decided to create a new bytecode specification, suitable for a register-based virtual machine. By doing this, we hope to obtain an increase in application performance.
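To make the stack-versus-register contrast concrete, here is an illustrative sketch (the instruction encoding is invented, not the specification proposed in the thesis): on the standard JVM the statement `a = b + c` compiles to four stack instructions, whereas a register machine can express it as a single three-address instruction.

```java
// On the stack-based JVM, "a = b + c" (with b, c, a in locals 1, 2, 3) becomes:
//     iload_1   // push b
//     iload_2   // push c
//     iadd      // pop two operands, push their sum
//     istore_3  // pop the result into a
// A register machine folds this into one instruction, e.g. ADD r3, r1, r2.
public class RegisterVmDemo {
    public static void main(String[] args) {
        int[] regs = new int[4];              // local variables mapped directly to registers
        regs[1] = 7;                          // b
        regs[2] = 35;                         // c
        int[] add = {3, 1, 2};                // hypothetical encoding: {dst, src1, src2}
        regs[add[0]] = regs[add[1]] + regs[add[2]];
        System.out.println("a = " + regs[3]); // prints a = 42
    }
}
```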
106

The Usefulness of Programming Languages Beyond Java

Jonsson, Alexander January 2019 (has links)
Beyond Java, new programming languages running on the Java Virtual Machine (JVM) have been developed, such as Kotlin, Scala, JRuby and Clojure, among others. Since all of these languages compile to Java bytecode, they should in theory be usable together in a single project. This paper investigates whether that is possible and what benefits it brings. The languages chosen to be used together were Jython, Scala and Kotlin. An experiment was conducted in which, within a single project, each programming language was assigned a problem to solve. The experiment was run in two iterations, and in each iteration the problems were assigned to a different programming language. The experiment showed that using these languages together in a project is possible but leads to some complications that need to be solved. It also showed that the following division among the languages worked best in the present use case: Jython for graphical handling, Scala for calculation and computation, and Kotlin for data handling.
107

Uma metodologia para caracterização de aplicações em ambientes de computação nas nuvens. / A methodology of application characterization in cloud computing environment.

Ogura, Denis Ryoji 04 October 2011 (has links)
Cloud computing is a recent term coined to express a technology trend that virtualizes the data center. The concept seeks better use of computational resources and corporate applications, virtualized through operating-system virtualization, platforms, infrastructure, software and so on. This virtualization relies on virtual machines (VMs) to execute applications in the virtualized environment. However, a VM may be configured in such a way that processing is delayed because of bottlenecks in some allocated hardware. In order to make the best use of the hardware allocated when a VM is created, a methodology of application characterization was developed to collect performance data and find the best VM configuration. Based on the observed workload, the methodology classifies the application type and indicates the most suitable environment, a recommended one and a non-recommended one. In this way, the likelihood of obtaining satisfactory performance in virtualized environments can be assessed through program characterization, which makes it possible to evaluate the behaviour of each scenario and to identify the conditions required for proper operation. To validate this argument, single-core and multi-core applications were executed on several virtual machine monitors. The results were satisfactory and are in line with each previously known application characteristic. Exceptions can nevertheless occur, mainly when the virtual machine monitor itself is subjected to intense processing; in that case the application may be delayed by a processing bottleneck in the virtual machine monitor, which changes the ideal environment for that application. This study therefore presents a method for identifying the ideal configuration for executing an application.
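As a purely illustrative sketch of what such a characterization step can look like (the metric names, thresholds, and recommendations below are assumptions, not the methodology's actual values), an application's measured workload can be mapped to a recommended VM profile:

```java
// Hypothetical workload classifier: given coarse utilization metrics gathered
// while the application runs, label the application and suggest a VM profile.
public class WorkloadCharacterizer {
    enum Profile { CPU_BOUND, MEMORY_BOUND, IO_BOUND, BALANCED }

    static Profile classify(double cpuUtil, double memUtil, double ioWait) {
        if (ioWait > 0.30) return Profile.IO_BOUND;      // thresholds are illustrative
        if (cpuUtil > 0.80 && memUtil < 0.50) return Profile.CPU_BOUND;
        if (memUtil > 0.80) return Profile.MEMORY_BOUND;
        return Profile.BALANCED;
    }

    static String recommend(Profile p) {
        switch (p) {
            case CPU_BOUND:    return "more vCPUs, modest RAM";
            case MEMORY_BOUND: return "more RAM per vCPU";
            case IO_BOUND:     return "faster virtual disks / paravirtual I/O";
            default:           return "general-purpose VM";
        }
    }

    public static void main(String[] args) {
        Profile p = classify(0.92, 0.35, 0.05);   // e.g., a single-threaded compute kernel
        System.out.println(p + " -> " + recommend(p));
    }
}
```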
108

Toward harnessing a Java high-level language virtual machine for supporting software testing / Utilizando uma máquina virtual Java como apoio à atividade de teste de software

Durelli, Vinicius Humberto Serapilha 01 October 2013 (has links)
High-level language virtual machines (HLL VMs) have been playing a key role as a mechanism for implementing programming languages. Languages that run on these execution environments have many advantages over languages that are compiled to native code, and these advantages have led HLL VMs to gain broad acceptance in both academia and industry. However, much of the research in this area has been devoted to boosting the performance of these execution environments; few efforts have attempted to introduce features that automate or facilitate software engineering activities, including software testing. This research argues that HLL VMs provide a reasonable basis for building an integrated software testing environment. To this end, two software testing features that build on the characteristics of a Java virtual machine (JVM) were devised. The purpose of the first feature is to automate weak mutation: augmented with mutation support, the chosen JVM achieved speedups of as much as 95% in comparison to a strong mutation tool. To support the testing of concurrent programs, the second feature enables the deterministic re-execution of Java programs and the exploration of new scheduling sequences.
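For readers unfamiliar with the distinction the first feature relies on, here is a small, self-contained sketch of what weak mutation checks (the mutant and inputs are invented for illustration; this is not the JVM-level implementation described above): a mutant is weakly killed when the mutated expression yields a different intermediate state at the mutation point, without re-running the whole program and comparing final outputs as strong mutation does.

```java
// Illustrative weak-mutation check: compare states right after the mutated
// expression instead of comparing whole-program outputs.
public class WeakMutationDemo {
    static int original(int a, int b) { return a + b; }
    static int mutant(int a, int b)   { return a - b; }   // AOR mutant: '+' replaced by '-'

    public static void main(String[] args) {
        // Weak mutation: the mutant is killed if the intermediate results differ.
        boolean killedByZero = original(3, 0) != mutant(3, 0);
        System.out.println("Input (3, 0) weakly kills the mutant: " + killedByZero); // false: 3+0 == 3-0
        boolean killedByFour = original(3, 4) != mutant(3, 4);
        System.out.println("Input (3, 4) weakly kills the mutant: " + killedByFour); // true: 7 != -1
    }
}
```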
109

Estratégias para uso eficiente de recursos em centros de dados considerando consumo de CPU e RAM / Strategies for efficient usage of resources in data centers considering the consumption of CPU and RAM

Castro, Pedro Henrique Pires de 04 August 2014 (has links)
Cloud computing is being consolidated as a new distributed-systems paradigm, offering computing resources in a virtualized way and with unprecedented levels of flexibility, reliability, and scalability. Unfortunately, the benefits of cloud computing come at a high cost in terms of energy, mainly because of one of its core enablers, the data center. There are a number of proposals that seek to improve energy efficiency in data centers; however, most of them focus only on the energy consumed by the CPU and ignore the remaining hardware, e.g., RAM. In this work, we show the considerable impact that RAM can have on total energy consumption, particularly in servers with large amounts of this memory. We also propose three new approaches for dynamic consolidation of virtual machines (VMs) that take into account both CPU and RAM usage. We have implemented and evaluated our proposals in the CloudSim simulator using real-world traces and compared the results with state-of-the-art solutions. By adopting a wider view of the system, our proposals are able to reduce not only energy consumption but also the number of SLA violations, i.e., they provide a better service at a lower cost.
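The essential difference from CPU-only policies can be illustrated with a small standalone placement heuristic (a sketch under assumed capacities and a made-up scoring function, not the thesis's CloudSim implementation): candidate hosts are scored on projected CPU and RAM utilization together, so a host that is RAM-saturated but CPU-idle is no longer treated as half empty.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Hypothetical CPU+RAM-aware VM placement heuristic. */
public class CpuRamPlacement {
    record Host(String name, double cpuCap, double ramCap, double cpuUsed, double ramUsed) {
        // Lower score is better: penalise both raw load and CPU/RAM imbalance.
        double score(double vmCpu, double vmRam) {
            double cpuUtil = (cpuUsed + vmCpu) / cpuCap;
            double ramUtil = (ramUsed + vmRam) / ramCap;
            return Math.max(cpuUtil, ramUtil) + Math.abs(cpuUtil - ramUtil);
        }
        boolean fits(double vmCpu, double vmRam) {
            return cpuUsed + vmCpu <= cpuCap && ramUsed + vmRam <= ramCap;
        }
    }

    static Optional<Host> place(List<Host> hosts, double vmCpu, double vmRam) {
        return hosts.stream()
                .filter(h -> h.fits(vmCpu, vmRam))
                .min(Comparator.comparingDouble(h -> h.score(vmCpu, vmRam)));
    }

    public static void main(String[] args) {
        List<Host> hosts = List.of(
                new Host("h1", 16, 64, 4, 60),   // plenty of CPU left, almost no RAM left
                new Host("h2", 16, 64, 10, 20));
        place(hosts, 2, 8).ifPresent(h -> System.out.println("Chosen host: " + h.name())); // h2
    }
}
```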
