251

Reaching High Availability in Connected Car Backend Applications

Yadav, Arpit 08 September 2017 (PDF)
The connected car segment places high demands on the exchange of data between cars on the road and a variety of backend services. According to the Telefónica Connected Car Industry Report 2014, connected services will be mainstream automotive offerings by the end of 2020, and the share of vehicles with built-in internet connectivity will grow from about 10% of the overall market today to 90% by the end of the decade [1]. Connected car solutions will soon become one of the major business drivers for the industry; they already have a significant impact on the development of existing solutions and on the aftersales market. More than three decades have passed since the introduction of the first software component in cars, and since then a vast number of different services have been introduced, creating an ecosystem of complex applications, architectures, and platforms. The complexity of the connected car ecosystem results in a range of new challenges. Backend applications must be scalable and flexible enough to accommodate the load created by unpredictable user and device behavior. To deliver superior uptime, backend systems must be highly integrated and automated to guarantee the lowest possible failure rate, high availability, and the fastest time-to-market. Connected car services increasingly rely on cloud-based delivery models to improve the user experience and enhance features for millions of vehicles and their users on a daily basis. Today's software applications are increasingly complex, and the number of components that interact with each other is extremely large. If a fault occurs in such a system, it can easily propagate to other components, resulting in problems that are difficult to detect and debug; a robust and resilient architecture is therefore needed to ensure the continuous availability of the system in the face of component failures, making the overall system highly available. The goal of the thesis is to gain insight into the development of highly available applications and to explore the area of fault tolerance. It surveys relevant design patterns, describes the capabilities of fault tolerance libraries for the Java platform, designs an appropriate solution for developing a highly available application, and evaluates its behavior with stress and load testing using Chaos Monkey methodologies.
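As an illustrative aside (not taken from the thesis), the sketch below shows one of the fault-tolerance design patterns such a backend typically relies on, a circuit breaker, in plain Java. Production systems would normally use a Java fault-tolerance library of the kind the thesis surveys (e.g. Hystrix or Resilience4j); the class name, thresholds, and fallback behavior here are hypothetical.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

/**
 * Minimal circuit-breaker sketch (illustrative only; real projects would use a
 * fault-tolerance library). All names and thresholds here are hypothetical.
 */
public class SimpleCircuitBreaker {
    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final Duration openTimeout;
    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private Instant openedAt = Instant.MIN;

    public SimpleCircuitBreaker(int failureThreshold, Duration openTimeout) {
        this.failureThreshold = failureThreshold;
        this.openTimeout = openTimeout;
    }

    /** Runs the backend call if the breaker allows it, otherwise returns the fallback. */
    public synchronized <T> T call(Supplier<T> backendCall, Supplier<T> fallback) {
        if (state == State.OPEN) {
            if (Duration.between(openedAt, Instant.now()).compareTo(openTimeout) < 0) {
                return fallback.get();               // still open: short-circuit the call
            }
            state = State.HALF_OPEN;                 // timeout elapsed: allow one probe
        }
        try {
            T result = backendCall.get();
            consecutiveFailures = 0;                 // success closes the breaker again
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
                state = State.OPEN;                  // trip: stop hammering the failing backend
                openedAt = Instant.now();
            }
            return fallback.get();
        }
    }
}
```

A backend endpoint handler could, for example, wrap a call to a downstream telemetry service in `call(...)` and return cached data as the fallback, so that a failing dependency degrades one response instead of propagating the fault through the whole system.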
252

User Experience-Based Provisioning Services in Vehicular Clouds

Aloqaily, Moayad January 2016
Today, the increasing number of applications based on the Internet of Things, together with advances in wireless communication, information and communication technology, and mobile cloud computing, allows users to access a wide range of resources while mobile. Vehicular clouds are considered key elements of today's intelligent transportation systems. They are outfitted with equipment to enable applications and services for vehicle drivers, surrounding vehicles, pedestrians and third parties. As vehicular cloud computing has become more popular, owing to its ability to improve driver and vehicle safety and to provide provisioning services and applications, researchers and industry have taken a growing interest in the design and development of vehicular networks for emerging applications. Although vehicle drivers can now access a variety of on-demand resources en route via vehicular network service providers, the development of vehicular cloud provisioning services still faces many challenges. In this dissertation, we examine the most critical provisioning-service challenges drivers face, including cost, privacy and latency. To date, very little research has addressed these issues from the driver's perspective. Privacy and service latency are emerging challenges for drivers, as are service costs, since paying for such services is a relatively new financial concept. Motivated by the Quality of Experience paradigm and the concept of the Trusted Third Party, we identify and investigate these challenges and examine the limitations and requirements of a vehicular environment. We found no research that addresses these challenges simultaneously or investigates their effect on one another. We have developed a Quality of Experience framework that provides scalability and reduces congestion overhead for users. Furthermore, we propose two theory-based frameworks to manage on-demand service provisioning in vehicular clouds: Auction-driven Multi-objective Provisioning and a Multiagent/Multiobjective Interaction Game System. We present different approaches to these and show, through analytical and simulation results, that the proposed schemes help drivers minimize cost and latency and maximize privacy.
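As an illustrative aside (not taken from the dissertation), the sketch below shows, in plain Java, the flavor of multi-objective provisioning described above: candidate offers from vehicular-cloud providers are scored on cost, latency and privacy, and the best one is selected. The weights, record fields, and scoring rule are hypothetical and are not the dissertation's actual auction or game formulation.

```java
import java.util.Comparator;
import java.util.List;

/**
 * Illustrative sketch only: scoring candidate vehicular-cloud offers on the three
 * criteria highlighted in the abstract (cost, latency, privacy) and picking the best.
 */
public class OfferSelection {

    /** A provider's offer: monetary cost, expected latency, and a privacy rating in [0,1]. */
    public record Offer(String provider, double cost, double latencyMs, double privacyScore) {}

    /** Lower is better: weighted cost and latency, minus a bonus for stronger privacy. */
    static double score(Offer o, double wCost, double wLatency, double wPrivacy) {
        return wCost * o.cost() + wLatency * o.latencyMs() - wPrivacy * o.privacyScore();
    }

    public static Offer selectBest(List<Offer> offers) {
        // Example weights; in practice these would come from the driver's QoE preferences.
        double wCost = 0.5, wLatency = 0.3, wPrivacy = 0.2;
        return offers.stream()
                .min(Comparator.comparingDouble((Offer o) -> score(o, wCost, wLatency, wPrivacy)))
                .orElseThrow(() -> new IllegalStateException("no offers received"));
    }

    public static void main(String[] args) {
        List<Offer> offers = List.of(
                new Offer("edge-A", 0.10, 45.0, 0.9),
                new Offer("cloud-B", 0.05, 120.0, 0.6));
        System.out.println("Selected: " + selectBest(offers).provider());
    }
}
```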
253

Analýza vybraných zahraničních Government Cloud řešení a návrh koncepce pro ČR / The Analysis of Selected Foreign Government Cloud Solutions and the Design of the Concept for the Czech Republic

Petráš, František January 2015
This work deals with Government Cloud (G-Cloud) solutions, which provide shared ICT services to public authorities. These solutions are deployed mainly because they bring significant financial savings for governments. The aim of this work is to analyse three selected G-Cloud solutions and, based on this analysis, to propose a concept suitable for the Czech Republic. The G-Cloud solutions used in Denmark, Great Britain and Slovakia were selected for the analysis. First, a set of unified criteria was established to allow an objective comparison. Using these criteria, the analysed solutions are described in terms of their concept, the services offered, the process of delivering and using the services, the scheduling of the entire solution and its component parts, security, legislation, and the provision of information to interested parties. The main contributions of this work are the comparison of the analysed solutions and the draft concept of a solution for the Czech Republic.
254

Fatores críticos de sucesso em projetos de ERP cloud: uma análise quantitativa do cenário brasileiro / Critical success factors in cloud ERP projects: a quantitative analysis of the Brazilian scenario

Gheller, Angélica Aparecida 16 April 2017
The main objective of this research was to investigate the critical success factors for cloud ERP projects in Brazilian companies. The secondary objectives were to identify the perceived benefits of such implementations with respect to improved organizational performance, and to elaborate a matrix to support the management of such projects, based on a review of the literature and the data collected in the field research. Data were collected with a self-administered questionnaire, and canonical correlation was the statistical method used to answer the main research question. In addition, a factor analysis and a multiple linear regression were run to investigate possible relationships between the independent variables (critical success factors) and the dependent variables (benefits) of the research model. The results did not validate all the critical factors found in the literature, and those that were confirmed empirically were regrouped into new dimensions: information security and compliance alignment, and communication and stakeholder management.
255

Efficient and secure mobile cloud networking / Réseau cloud mobile et sécurisé

Bou Abdo, Jacques 18 December 2014
Mobile cloud computing is a very strong candidate for the title "Next Generation Network", as it empowers mobile users with extended mobility, service continuity and superior performance. Users can expect to execute their jobs faster, with lower battery consumption and at affordable prices; however, this is not always the case. Various mobile applications have been developed to take advantage of this new technology, but each application has its own requirements. Several mobile cloud architectures have been proposed, but none is suitable for all mobile applications, which has resulted in lower customer satisfaction. In addition, the absence of a valid business model to motivate investors has hindered deployment at production scale. This dissertation proposes a new mobile cloud architecture which positions the mobile operator at the core of this technology, equipped with a revenue-making business model. This architecture, named OCMCA (Operator Centric Mobile Cloud Architecture), connects the user on one side and the Cloud Service Provider (CSP) on the other, and hosts a cloud within the operator's network.
The OCMCA/user connection can utilize multicast channels, leading to a much cheaper service for users and, for the operator, more revenue and lower congestion and rejection rates. The OCMCA/CSP connection is based on federation; thus a user who is registered with any CSP can request that her environment be offloaded to the mobile operator's hosted cloud in order to receive all of OCMCA's services and benefits. The contributions of this thesis are manifold. First, we propose OCMCA and show that it outperforms all other mobile cloud architectures. The business model of this architecture centers on the user's freedom of subscription: a user can be subscribed to any cloud provider and still be able to connect, through this architecture, to his or her environment using offloading and federation...
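As an illustrative aside (not taken from the dissertation), the sketch below shows, in plain Java, how a client joins an IP multicast group and receives one shared transmission, the delivery mode OCMCA exploits so that a single downlink stream can serve many subscribers. The group address, port, and payload handling are hypothetical.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;
import java.nio.charset.StandardCharsets;

/**
 * Illustrative sketch only: a receiver joining a multicast group so that one
 * transmission from the operator's cloud reaches many clients at once.
 */
public class MulticastReceiver {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("230.0.0.1");   // hypothetical group address
        int port = 4446;                                           // hypothetical port
        try (MulticastSocket socket = new MulticastSocket(port)) {
            // May be null; joinGroup then falls back to the default interface.
            NetworkInterface nic = NetworkInterface.getByInetAddress(InetAddress.getLocalHost());
            socket.joinGroup(new InetSocketAddress(group, port), nic);
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);                                // one shared transmission, many receivers
            System.out.println(new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8));
            socket.leaveGroup(new InetSocketAddress(group, port), nic);
        }
    }
}
```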
256

Digitalisering av redovisningssystem : En kvalitativ studie som undersöker vilka faktorer som påverkar beslutet att implementera ett molnbaserat redovisningssystem / Digitalization of accounting systems: A qualitative study examining which factors influence the decision to implement a cloud-based accounting system

Kocas, Linnea, Nyblom, Linda January 2023
Digitalization has been under way for a long time and has affected large parts of society. Accounting is one of the areas where digitalization has been crucial for efficiency and improved systems, and cloud-based accounting systems, offered by various cloud providers, are among the systems that have been affected. Data were collected through a qualitative, multiple case study design; all interviewees have been involved in decision-making around cloud-based accounting systems. Using the service model, the TAM model and the TOE framework, an analysis was made of cloud services and what they offer. Based on previous research and the collected data, the service model shows that the advantages of implementing cloud-based accounting systems are cost savings and flexibility. Previous research identifies security and privacy as the disadvantages of implementation, which the collected data partly confirms, although it also shows that the area has developed. The TAM model examines factors that influence the acceptance and adoption of cloud-based accounting systems in terms of usefulness and ease of use; previous research and the collected data show a positive correlation between these variables and cloud-based accounting systems. The TOE framework covers factors that influence the use and adoption of cloud-based accounting systems from technical, organizational and environmental perspectives; previous research and the collected data show a positive correlation between these aspects and cloud-based accounting systems as well.
257

Arquitecturas para la computación de altas prestaciones en la nube. Aplicación a procesos de geometría computacional / Architectures for high-performance computing in the cloud: application to computational geometry processes

Sánchez-Ribes, Víctor 03 March 2024 (has links)
Cloud computing is one of the technologies shaping today's world, and companies must make use of it to remain competitive in a globalized market. The traditional sectors of the manufacturing industry (footwear, furniture and toys, among others) are characterized mainly by design-intensive and manufacturing-intensive work in the production of new seasonal products. This work is carried out with 3D modeling and manufacturing software, commonly known as CAD/CAM, which is based mainly on the application of modeling primitives and geometric computation. Computation outsourcing is the method used to move the processing load to the cloud. This technique brings many advantages to design and manufacturing processes: a reduced upfront cost for small and medium-sized enterprises that need large computing capacity, a very flexible infrastructure that provides adjustable computing power, the delivery of CAD/CAM computing services to designers around the world, and so on. However, outsourcing geometric computation to the cloud involves several challenges that must be overcome for the proposal to be viable. The objective of this work is to explore new ways of exploiting specialized devices and improving the capabilities of GPUs by reviewing and comparing the available parallel programming techniques, and to propose the optimal configuration of the cloud architecture and the application development needed to improve the degree of parallelization of specialized processing devices, serving as a basis for their greater exploitation in the cloud by small and medium-sized enterprises. Finally, this work presents the experiments used to validate the proposal, both at the level of the communication architecture and of GPU programming, and draws conclusions from this experimentation.
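As an illustrative aside (not taken from the thesis), the sketch below shows the kind of embarrassingly parallel geometric kernel that such work targets for offloading, here a nearest-point distance query, expressed with Java parallel streams purely to illustrate the data-parallel structure. An actual GPU or cloud offload would use CUDA, OpenCL or a remote service; the class, method and record names here are hypothetical.

```java
import java.util.List;
import java.util.stream.IntStream;

/**
 * Illustrative sketch only: a data-parallel geometric primitive of the kind a
 * CAD/CAM kernel might offload. Parallel streams merely show the structure.
 */
public class ParallelGeometry {

    public record Point(double x, double y, double z) {}

    /** Distance from each query point to its nearest point in a reference cloud. */
    static double[] nearestDistances(List<Point> queries, List<Point> reference) {
        return IntStream.range(0, queries.size())
                .parallel()                                   // the span a GPU/cloud offload would replace
                .mapToDouble(i -> reference.stream()
                        .mapToDouble(r -> dist(queries.get(i), r))
                        .min()
                        .orElse(Double.POSITIVE_INFINITY))
                .toArray();
    }

    static double dist(Point a, Point b) {
        double dx = a.x() - b.x(), dy = a.y() - b.y(), dz = a.z() - b.z();
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```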
258

A Performance Comparison of VMware GPU Virtualization Techniques in Cloud Gaming

2016 March 1900
Cloud gaming is an application deployment scenario in which an interactive gaming application runs remotely in a cloud, receives commands from a thin client, and streams the rendered scenes back to the client over the Internet as a video sequence; it is of interest to both the research community and industry. The academic community has developed open-source cloud gaming systems such as GamingAnywhere for research study, while industrial pioneers such as OnLive and Gaikai have succeeded in gaining a large user base in the cloud gaming market. Graphics Processing Unit (GPU) virtualization plays an important role in such an environment, as it is the critical component that allows virtual machines to run 3D applications with performance guarantees. Currently, GPU pass-through and GPU sharing are the two main techniques of GPU virtualization. The former enables a single virtual machine to access a physical GPU directly and exclusively, while the latter makes a physical GPU shareable by multiple virtual machines. VMware Inc., one of the most popular virtualization solution vendors, provides concrete implementations of both: a GPU pass-through solution called Virtual Dedicated Graphics Acceleration (vDGA) and a GPU-sharing solution called Virtual Shared Graphics Acceleration (vSGA). Moreover, VMware Inc. has recently claimed to have realized another GPU-sharing solution called vGPU. Nevertheless, the feasibility and performance of these solutions in cloud gaming have not yet been studied. In this work, an experimental study is conducted to evaluate the feasibility and performance of the GPU pass-through and GPU sharing solutions offered by VMware in cloud gaming scenarios. The primary results confirm that the vDGA and vGPU techniques can meet the demands of cloud gaming: these two solutions achieved good performance in the tested graphics card benchmarks and acceptable image quality and response delay for the tested games.
259

Security and Privacy of Sensitive Data in Cloud Computing

Gholami, Ali January 2016
Cloud computing offers the prospect of on-demand, elastic computing provided as a utility service, and it is revolutionizing many domains of computing. Compared with earlier methods of processing data, cloud computing environments provide significant benefits, such as the availability of automated tools to assemble, connect, configure and reconfigure virtualized resources on demand. These make it much easier to meet organizational goals, as organizations can easily deploy cloud services. However, the shift in paradigm that accompanies the adoption of cloud computing is increasingly giving rise to security and privacy considerations relating to facets of cloud computing such as multi-tenancy, trust, loss of control and accountability. Consequently, cloud platforms that handle sensitive information are required to deploy technical measures and organizational safeguards to avoid data protection breakdowns that might result in enormous and costly damages. Sensitive information in the context of cloud computing encompasses data from a wide range of different areas and domains. Data concerning health is a typical example of the type of sensitive information handled in cloud computing environments, and it is obvious that most individuals will want information related to their health to be secure. Hence, with the growth of cloud computing in recent times, privacy and data protection requirements have been evolving to protect individuals against surveillance and data disclosure. Some examples of such protective legislation are the EU Data Protection Directive (DPD) and the US Health Insurance Portability and Accountability Act (HIPAA), both of which demand privacy preservation for handling personally identifiable information. There have been great efforts to employ a wide range of mechanisms to enhance the privacy of data and to make cloud platforms more secure. Techniques that have been used include encryption, trusted platform modules, secure multi-party computation, homomorphic encryption, anonymization, and container and sandboxing technologies. However, how to correctly build usable privacy-preserving cloud systems that handle sensitive data securely remains an open problem, due to two research challenges. First, existing privacy and data protection legislation demands strong security, transparency and auditability of data usage. Second, there is a lack of familiarity with the broad range of emerging and existing security solutions needed to build efficient cloud systems. This dissertation focuses on the design and development of several systems and methodologies for handling sensitive data appropriately in cloud computing environments. The key idea behind the proposed solutions is enforcing the privacy requirements mandated by existing legislation that aims to protect the privacy of individuals in cloud computing platforms. We begin with an overview of the main concepts of cloud computing, followed by an identification of the problems that need to be solved for secure data management in cloud environments. The thesis then describes background material and reviews existing security and privacy solutions used in the area of cloud computing. Our first main contribution is a new method for modeling threats to privacy in cloud environments, which can be used to identify privacy requirements in accordance with data protection legislation. This method is then used to propose a framework that meets the privacy requirements for handling data in the area of genomics, that is, health data concerning the genome (DNA) of individuals. Our second contribution is a system for preserving privacy when publishing sample availability data; this system is noteworthy because it is capable of cross-linking over multiple datasets. The thesis continues by proposing a system called ScaBIA for privacy-preserving brain image analysis in the cloud. The final section of the dissertation describes a new approach for quantifying and minimizing the risk of operating system kernel exploitation, in addition to the development of a system call interposition reference monitor for Lind, a dual-layer sandbox.
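As an illustrative aside (not taken from the thesis), the sketch below shows one of the anonymization ideas listed above, a k-anonymity check over generalized quasi-identifiers, in plain Java. The record fields, the value of k, and the class names are hypothetical and are not the thesis's actual system.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Illustrative sketch only: checking k-anonymity of a release, one of the
 * anonymization techniques surveyed for protecting sensitive data.
 */
public class KAnonymityCheck {

    /** A released record reduced to its (already generalized) quasi-identifiers. */
    public record QuasiId(String ageRange, String zipPrefix, String sex) {}

    /** A release is k-anonymous if every quasi-identifier combination occurs at least k times. */
    static boolean isKAnonymous(List<QuasiId> records, int k) {
        Map<QuasiId, Long> groupSizes = records.stream()
                .collect(Collectors.groupingBy(r -> r, Collectors.counting()));
        return groupSizes.values().stream().allMatch(count -> count >= k);
    }

    public static void main(String[] args) {
        List<QuasiId> release = List.of(
                new QuasiId("30-39", "114**", "F"),
                new QuasiId("30-39", "114**", "F"),
                new QuasiId("40-49", "115**", "M"));
        System.out.println("2-anonymous? " + isKAnonymous(release, 2)); // false: last group has size 1
    }
}
```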
260

Thin client collaborative visualizations using the distributed cloud

Hemmings, Matthew 12 July 2016
This thesis describes the research, design, implementation, and evaluation of a collaborative visualization system that models large data sets in thin clients using the Lively Web development environment. A thin client is a computing device with light resources that depends heavily on remote computational resources for any large-scale data processing; it could be a cellular phone, a tablet, or a laptop with insufficient resources to perform heavy computing locally. Applications of this technology form part of a new class of application in which large data sets are visualized and collaborated on with low latency by geographically separated users. The primary motivation of this research is to show that large data sets can be viewed and interacted with on any device, regardless of geographic location, in collaboration with other users and with no setup required by the user. In addition, it shows the strengths of the Lively Web for developing impressive thin-client visualizations in a flexible, straightforward manner. For deployment, Lively Web servers are brought up in Docker containers on the distributed cloud, using virtual machines allocated by the Global Environment for Network Innovations (GENI) and Smart Applications on Virtual Infrastructure (SAVI) networks.
