About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

High Performance Computational Fluid Dynamics on Clusters and Clouds: the ADAPT Experience

Kissami, Imad 28 February 2017
In this thesis, we present our research work in the field of high performance computing for fluid mechanics (CFD) on cluster and cloud architectures. In general, we propose to develop an efficient solver, called ADAPT, for solving CFD problems, both in a classic view corresponding to MPI developments and in a view that leads us to represent ADAPT as a graph of tasks intended to be scheduled on a cloud computing platform. As a first contribution, we propose a parallelization of the convection-diffusion equation coupled to a linear system, in 2D and 3D, using MPI. A two-level parallelization is used in our implementation to take advantage of current distributed multicore machines. A balanced distribution of the computational load is obtained by decomposing the domain with METIS, and our very large sparse linear system is solved efficiently with the parallel solver MUMPS (MUltifrontal Massively Parallel Solver). Our second contribution illustrates how to view the ADAPT framework, as depicted in the first contribution, as a service. We transform the framework (in fact, a part of it) into a DAG (Directed Acyclic Graph) in order to treat it as a scientific workflow, and we then introduce new policies inside the RedisDG workflow engine to schedule the tasks of the DAG opportunistically. We add to RedisDG the ability to work with dynamic workers (they can leave or enter the computing system as they wish) and a multi-criteria approach for deciding on the “best” worker to execute a task. Experiments are conducted on the ADAPT workflow to exemplify the quality of the scheduling and the scheduling decisions in the new RedisDG.
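The multi-criteria worker selection described in the abstract can be pictured as a weighted score over the currently available workers. The following is a minimal sketch of that idea only; the criteria, weights, and names are illustrative assumptions, not RedisDG's actual interface.

```python
# Hypothetical sketch of a multi-criteria worker-selection policy, in the
# spirit of the RedisDG extension described above. Criteria names and
# weights are illustrative assumptions, not RedisDG's actual interface.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    latency_ms: float      # observed round-trip time to the worker
    success_rate: float    # fraction of past tasks completed successfully
    load: float            # current queue occupancy, 0.0 (idle) to 1.0 (full)

def score(w: Worker, weights=(0.4, 0.4, 0.2)) -> float:
    """Higher is better: favor fast, reliable, lightly loaded workers."""
    w_lat, w_succ, w_load = weights
    return (w_lat * (1.0 / (1.0 + w.latency_ms))
            + w_succ * w.success_rate
            + w_load * (1.0 - w.load))

def pick_best(workers: list[Worker]) -> Worker:
    # Workers may join or leave at any time, so the candidate list is
    # re-read before every scheduling decision.
    return max(workers, key=score)

workers = [Worker("vm-1", 12.0, 0.99, 0.7), Worker("vm-2", 45.0, 0.95, 0.1)]
print(pick_best(workers).name)
```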
252

Data Protection over Cloud

January 2016
Data protection has long been a point of contention and a vastly researched field. With the advent of technology and advances in Internet technologies, securing data has become much more challenging. Cloud services have become very popular, and given their ease of access and availability, it is hard not to use the cloud to store data. This, however, poses a significant risk to data security, as more of your data is available to a third party. Given the easy transmission and almost infinite storage of data, securing one's sensitive information has become a major challenge. Cloud service providers may not be trusted completely with your data: it is not uncommon for providers to snoop over the data to find interesting patterns that generate ad revenue, or to divulge your information to a third party such as government and law enforcement agencies. For enterprises that use cloud services, this poses a risk to their intellectual property and business secrets. With more and more employees using the cloud for their day-to-day work, businesses now face a risk of losing or leaking information. In this thesis, I have focused on ways to protect data and information from the cloud (a third party not authorized to use your data) while still utilizing cloud services for the transfer and availability of data. This research proposes an alternative to an on-premise secure infrastructure, giving the user flexibility in protecting the data and control over it. The project uses cryptography to protect data and creates a secure architecture for secret key migration in order to decrypt the data securely for the intended recipient. It utilizes Intel's technology, which gives it an added advantage over other existing solutions. / Dissertation/Thesis / Masters Thesis Computer Science 2016
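The abstract names its building blocks (client-side cryptography plus a secret key delivered only to the intended recipient) without implementation detail. Below is a generic sketch of that pattern using AES-GCM from the third-party Python cryptography package; the thesis' actual key-migration architecture and Intel technology are not represented here.

```python
# Generic sketch of protecting data before it reaches the cloud: the data
# is encrypted locally and only ciphertext is uploaded. This illustrates
# the pattern named in the abstract, not the thesis' actual architecture
# (its Intel-based secret-key migration is not modeled here).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # secret key, migrated out of band
aesgcm = AESGCM(key)

plaintext = b"business secret spreadsheet"
nonce = os.urandom(12)                      # AES-GCM requires a unique nonce
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only (nonce, ciphertext) is stored in the cloud; the provider never
# sees the key, so it cannot snoop on the content.
cloud_blob = nonce + ciphertext

# The intended recipient, after receiving the key securely, decrypts:
recovered = AESGCM(key).decrypt(cloud_blob[:12], cloud_blob[12:], None)
assert recovered == plaintext
```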
253

Characterization of Cost Excess in Cloud Applications

January 2012
The pay-as-you-go economic model of cloud computing increases the visibility, traceability, and verifiability of software costs. Application developers must understand how their software uses resources when running in the cloud in order to stay within budgeted costs and/or produce expected profits. Cloud computing's unique economic model also leads naturally to an earn-as-you-go profit model for many cloud-based applications. These applications can benefit from low-level analyses for cost optimization and verification. Testing cloud applications to ensure they meet monetary cost objectives has not been well explored in the current literature. When considering revenues and costs for cloud applications, the resource economic model can be scaled down to the transaction level in order to associate source code with costs incurred while running in the cloud. Both static and dynamic analysis techniques can be developed and applied to understand how and where cloud applications incur costs. Such analyses can help optimize (i.e., minimize) costs and verify that they stay within expected tolerances. An adaptation of Worst Case Execution Time (WCET) analysis is presented here to statically determine worst-case monetary costs of cloud applications. This analysis is used to produce an algorithm for determining control flow paths within an application that can exceed a given cost threshold. The corresponding results are used to identify path sections that contribute most to cost excess. A hybrid approach for determining cost excesses is also presented that is comprised mostly of dynamic measurements but that also incorporates calculations based on the static analysis approach. This approach uses operational profiles to increase the precision and usefulness of the calculations. / Dissertation/Thesis / Ph.D. Computer Science 2012
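The dissertation's algorithm itself is not given in the abstract; the following is a minimal sketch of the underlying idea under assumed per-block costs: attach a monetary cost to each basic block of an acyclic control-flow graph, compute the costliest path by dynamic programming in topological order, and flag it when it exceeds a budget.

```python
# Minimal sketch (not the dissertation's algorithm): worst-case monetary
# cost over an acyclic control-flow graph, found by dynamic programming
# in topological order. Block names and per-block costs are illustrative.
from graphlib import TopologicalSorter

cost = {"entry": 0.001, "query_db": 0.010, "cache_hit": 0.002,
        "render": 0.004, "exit": 0.000}            # $ per execution
succ = {"entry": ["query_db", "cache_hit"],        # branch: cache miss vs. hit
        "query_db": ["render"], "cache_hit": ["render"],
        "render": ["exit"], "exit": []}

preds = {n: [] for n in succ}
for n, outs in succ.items():
    for m in outs:
        preds[m].append(n)

# graphlib expects predecessor sets; worst[n] = costliest path ending at n
order = TopologicalSorter({n: set(preds[n]) for n in succ}).static_order()
worst, via = {}, {}
for n in order:
    best = max(preds[n], key=lambda p: worst[p], default=None)
    worst[n] = cost[n] + (worst[best] if best is not None else 0.0)
    via[n] = best

THRESHOLD = 0.012                                  # budgeted cost per request
path, n = [], "exit"
while n is not None:
    path.append(n)
    n = via[n]
print(list(reversed(path)), f"worst case ${worst['exit']:.3f}",
      "EXCEEDS budget" if worst["exit"] > THRESHOLD else "within budget")
```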
254

A Cloud based Continuous Delivery Software Developing System on Vlab Platform

January 2013
Continuous Delivery, one of the youngest and most popular members of the agile model family, has recently become a popular concept and method in the software development industry. Unlike traditional software development methods, in which requirements and solutions must be fixed before development starts, it promotes adaptive planning, evolutionary development and delivery, and encourages rapid and flexible response to change. However, several problems prevent Continuous Delivery from being introduced into the world of education. Taking these barriers into consideration, we propose a new cloud-based Continuous Delivery software development system. This system is designed to support the whole software development life cycle according to Continuous Delivery concepts, in a virtualized environment on the Vlab platform. / Dissertation/Thesis / M.S. Computer Science 2013
255

Analytics as a Service: Analysis of services in Microsoft Azure

Winberg, André, Golrang, Ramin Alberto January 2017
No description available.
256

A Datacenter Infrastructure Management strategy based on distributed monitoring and centralized control

Thiago Teixeira Sá 29 August 2013
Cloud Computing has emerged as a paradigm for the use of computational resources in which hardware infrastructure, software, and platforms for developing new applications are offered as services available on a global scale. Such services are delivered through large-scale datacenters, where virtualization technologies are routinely employed for the shared use of resources. In this context, efficient management of the datacenter infrastructure can lead to significant reductions in its operating costs. This work presents a management strategy for virtualized datacenter infrastructures that applies distributed monitoring techniques combined with centralized control actions. This strategy seeks to reduce the overload effects observed in traditional management models based on a single controller node that accumulates both control and monitoring responsibilities. It thereby aims to increase the scalability of the infrastructure and improve its energy efficiency without compromising the Quality of Service (QoS) offered to the end user. The performance of the proposed strategy is analyzed through multiple simulation experiments carried out with tools designed specifically for modeling computational clouds, with support for representing the energy consumption of the simulated infrastructure.
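The abstract stays at the architecture level; as a toy illustration of the division of labor it describes, distributed agents can summarize their own samples locally so that only compact alerts reach the central controller. The thresholds, metrics, and actions below are hypothetical, not the thesis' implementation.

```python
# Toy sketch (hypothetical thresholds and metrics, not the thesis'
# implementation) of distributed monitoring with centralized control:
# every node filters its own samples, and only compact alerts travel
# to the controller node, which keeps the control decisions.
OVERLOAD, UNDERLOAD = 0.85, 0.20   # illustrative CPU-utilization bounds

def node_agent(node_id: str, samples: list[float]):
    """Runs on every host: summarize locally, report only anomalies."""
    avg = sum(samples) / len(samples)
    if avg > OVERLOAD:
        return (node_id, "overloaded", avg)
    if avg < UNDERLOAD:
        return (node_id, "underloaded", avg)
    return None  # steady state: nothing is sent, sparing the controller

def controller(alerts):
    """Central node: decides on migration/consolidation from alerts only."""
    for node_id, state, avg in alerts:
        action = ("migrate VMs away" if state == "overloaded"
                  else "consolidate VMs and power down")
        print(f"{node_id}: {state} ({avg:.0%}) -> {action}")

observations = {
    "host-01": [0.91, 0.88, 0.95],   # hot node
    "host-02": [0.45, 0.52, 0.48],   # healthy, stays silent
    "host-03": [0.05, 0.10, 0.08],   # idle, candidate for power saving
}
controller(filter(None, (node_agent(n, s) for n, s in observations.items())))
```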
257

PuzzlEdu: a proposal for education as a service

DANTAS, Eric Rommel Galvão 31 January 2011
With digital inclusion, Information and Communication Technologies (ICTs) have become essential in education, whether in-person, blended, or distance learning. With the spread of the latest and most advanced technologies, such as cloud computing, it is possible to make a variety of educational resources available to the community as services. This contributes to a broader reach for education, with reduced costs and integration with current technological development. By drawing on the concepts of Hardware as a Service and Software as a Service, and integrating the available educational resources with them, a new concept emerges: Learning as a Service (LaaS). In LaaS, everything is made available in the computational cloud, offering learning as a service to the community. To demonstrate the potential of LaaS, the educational software PuzzlEdu was developed and made available as a service, with the goal of helping students and teachers in the teaching and learning of object-oriented programming languages. It runs on a cloud platform, combining the advantages of that environment with requirements such as usability, flexibility, and extensibility. The GQM (Goal/Question/Metric) methodology was used to evaluate the proposal and measure its quality aspects. To meet this goal, this work carried out an evaluation of the usability and functionality of the proposed LaaS system with three user profiles: (i) students who had never been exposed to Object-Oriented Programming (OOP); (ii) students with OOP knowledge; and (iii) teachers who teach or have taught courses involving OOP concepts. This made it possible to show how much LaaS can contribute to a promising future for learning in education.
258

Probabilistic Risk Assessment in Clouds: Models and Algorithms

Palhares, André Vitor de Almeida 08 March 2012
Being able to rely on the cloud is critical to its success. Although fault-tolerance mechanisms are employed by cloud providers, there is always the possibility of failure of infrastructure components. We consequently need to think proactively about how to deal with the occurrence of failures, in an attempt to minimize their effects. In this work, we draw on the risk concept from probabilistic risk analysis in order to achieve this. In probabilistic risk analysis, consequence costs are associated with failure events of the target system, and failure probabilities are associated with infrastructural components. The risk is the expected consequence of the whole system. We use the risk concept to present representative mathematical models for which computational optimization problems are formulated and solved in a Cloud Computing environment. In these problems, consequence costs are associated with incoming applications that must be allocated in the Cloud, and the risk is either seen as an objective function that must be minimized or as a constraint that should be limited. The proposed problems are solved either by optimal algorithmic reductions or by approximation algorithms with provable performance guarantees. Finally, the models and problems are discussed from a more practical point of view, with examples of how to assess risk using these solutions. The solutions are also evaluated and results on their performance are established, showing that they can be used in the effective planning of the Cloud.
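The risk definition above (expected consequence: each failure event's probability times its consequence cost, summed over the system) is concrete enough for a worked sketch. The components, probabilities, and costs below are illustrative assumptions, not values from the dissertation.

```python
# Minimal sketch of probabilistic risk assessment as described above:
# risk = sum over failure events of P(failure) * consequence cost.
# Components, probabilities, and costs are illustrative assumptions,
# with component failures treated as independent.
failure_prob = {"rack-switch": 0.02, "host-42": 0.05, "san-volume": 0.01}

# Consequence cost (e.g. dollars of SLA penalty) if the component fails
# while hosting the incoming application.
consequence = {"rack-switch": 500.0, "host-42": 120.0, "san-volume": 900.0}

def expected_risk(components: list[str]) -> float:
    """Expected consequence of allocating an application on `components`."""
    return sum(failure_prob[c] * consequence[c] for c in components)

# Choose the allocation whose expected risk is lowest (or reject any
# exceeding a risk budget, when risk is treated as a constraint instead).
candidate_allocations = [["host-42", "rack-switch"], ["host-42", "san-volume"]]
best = min(candidate_allocations, key=expected_risk)
print(best, expected_risk(best))
```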
259

Building a high throughput microscope simulator using the Apache Kafka streaming framework

Lugnegård, Lovisa January 2018
Today, microscopy imaging is a widely used and powerful method for investigating biological processes. Microscopes can produce large amounts of data in a short time, making it impossible to analyse all the data thoroughly within time and cost constraints. HASTE (Hierarchical Analysis of Temporal and Spatial Image Data), a collaborative research project between Uppsala University, AstraZeneca, and Vironova, addresses this specific problem: the idea is to analyse the image data in real time to make fast decisions on whether to analyse it further, store it, or throw it away. To facilitate the development of this system, a microscope simulator has been designed and implemented, with a strong focus on parameters related to data throughput. Apart from building the simulator, the Apache Kafka framework has been evaluated for streaming large images. The results of this project are both a working simulator, whose performance is similar to that of the microscope, and an evaluation of Apache Kafka showing that it is possible to stream image data with the framework.
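The abstract gives no implementation details; as a rough sketch of what producing simulated frames into Kafka can look like, the following uses the third-party kafka-python client. The broker address, topic name, frame size, and frame rate are all assumptions, not details from the thesis.

```python
# Rough sketch of streaming simulated microscope frames through Apache
# Kafka with the third-party kafka-python client. Broker address, topic
# name, and the frame generator are illustrative assumptions.
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    max_request_size=10 * 1024 * 1024,  # raise the cap for large image frames
)

def fake_frame(i: int) -> bytes:
    """Stand-in for a simulated microscope image (here: a dummy payload)."""
    return b"\x00" * (2 * 1024 * 1024)  # ~2 MiB per frame

for i in range(100):
    # Key frames by acquisition index so consumers can reorder/partition.
    producer.send("microscope-frames", key=str(i).encode(), value=fake_frame(i))
    time.sleep(0.1)  # ~10 frames/s, mimicking instrument throughput

producer.flush()  # block until all buffered frames are acknowledged
```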
260

Cloud Computing – A review of Confidentiality and Privacy

Lindén, Simon January 2016
With the introduction of cloud computing, computation became distributed, virtualized, and scalable. This also meant that customers of cloud computing gave away some control of their systems, which heightened the importance of how security is handled in the cloud, for both provider and customer. Since security is such a broad subject, the focus of this thesis is on confidentiality and privacy, both closely related to the handling of personal data. With the help of a systematic literature review, this thesis presents current challenges and possible mitigations in several areas, concerning both the cloud provider and the cloud customer. The conclusion of the thesis is that cloud computing itself has matured a lot since the early 2000s, and all of the challenges presented have possible mitigations. However, the exact implementation of a given mitigation will differ depending on the cloud customer, the exact application developed, and the exact service provided by the cloud provider. In the end, it all boils down to a process that involves technology, employees, and policies; with that, any user can secure its cloud application.
