531

Caractérisation des propriétés microphysiques des nuages et de l'interaction aérosol-nuage en Arctique à partir de mesures in-situ au sol pendant la campagne CLIMSLIP-NyA, Svalbard / Characterization of the cloud microphysical and optical properties and aerosol-cloud interaction in Arctic from in situ ground-based measurements during the CLIMSLIP-NyA campaign, Svalbard

Guyot, Gwennolé 01 June 2016 (has links)
The Arctic region is especially sensitive to climate change. At high latitudes, Arctic clouds have a strong effect on the surface radiative budget. The first part of this work is a ground-based intercomparison of cloud instrumentation at the PUY station in May 2013. The measurements showed good agreement between the effective diameters and droplet size distributions obtained by the different instruments, but systematic biases in the concentrations. These biases were traced to the estimation of the sampling volume, and we therefore proposed a method that normalizes the data against an instrument performing integrated measurements over an ensemble of particles. In addition, the FSSP and the FM were the subject of experiments assessing the influence of the probe deflection angle relative to the ambient wind and of the wind speed. The second part of this work concerns the measurement campaign carried out at the Mount Zeppelin station, Ny-Alesund, Svalbard, from March to May 2012 within the CLIMSLIP project. A comparison was performed between a "polluted" case, with air masses coming from East Asia and Europe, and a "clean" case, whose aerosol sources are mostly local and do not extend beyond northern Europe. The results showed that the polluted case exhibits higher black carbon, aerosol and droplet concentrations, a stronger accumulation mode, smaller droplet diameters and a higher activation fraction. Finally, the first and second aerosol indirect effects were quantified.
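The normalization method mentioned above can be illustrated with a minimal sketch that rescales a probe's droplet size distribution so that an integrated quantity, here the liquid water content, matches the value from a reference instrument performing integrated measurements; the bin values, the choice of liquid water content as the matching quantity and the reference value are illustrative assumptions, not data from the campaign.

```python
import numpy as np

def normalize_to_reference(diam_um, dNdD_cm3_per_um, lwc_ref_g_m3):
    """Rescale a droplet size distribution so that its integrated liquid
    water content matches a reference (integrating) instrument.

    diam_um          : bin-center diameters [micrometres]
    dNdD_cm3_per_um  : measured distribution dN/dD [cm^-3 um^-1]
    lwc_ref_g_m3     : LWC reported by the reference instrument [g m^-3]
    """
    rho_w = 1.0e6                      # density of water [g m^-3]
    d_m = diam_um * 1e-6               # diameters in metres
    dD = np.gradient(diam_um)          # approximate bin widths [um]
    n_m3 = dNdD_cm3_per_um * dD * 1e6  # number concentration per bin [m^-3]
    # LWC implied by the probe: sum over bins of (pi/6) D^3 rho_w N
    lwc_probe = np.sum(np.pi / 6.0 * d_m**3 * rho_w * n_m3)
    scale = lwc_ref_g_m3 / lwc_probe
    return scale, dNdD_cm3_per_um * scale

# hypothetical example: 5 size bins, reference LWC of 0.12 g m^-3
diam = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
dist = np.array([20.0, 35.0, 15.0, 5.0, 1.0])
factor, dist_norm = normalize_to_reference(diam, dist, 0.12)
print(f"scaling factor applied to the probe concentrations: {factor:.2f}")
```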
532

SMASH: Survey of the MAgellanic Stellar History

Nidever, David L., Olsen, Knut, Walker, Alistair R., Vivas, A. Katherina, Blum, Robert D., Kaleida, Catherine, Choi, Yumi, Conn, Blair C., Gruendl, Robert A., Bell, Eric F., Besla, Gurtina, Muñoz, Ricardo R., Gallart, Carme, Martin, Nicolas F., Olszewski, Edward W., Saha, Abhijit, Monachesi, Antonela, Monelli, Matteo, de Boer, Thomas J. L., Johnson, L. Clifton, Zaritsky, Dennis, Stringfellow, Guy S., van der Marel, Roeland P., Cioni, Maria-Rosa L., Jin, Shoko, Majewski, Steven R., Martinez-Delgado, David, Monteagudo, Lara, Noël, Noelia E. D., Bernard, Edouard J., Kunder, Andrea, Chu, You-Hua, Bell, Cameron P. M., Santana, Felipe, Frechem, Joshua, Medina, Gustavo E., Parkash, Vaishali, Navarrete, J. C. Serón, Hayes, Christian 25 October 2017 (has links)
The Large and Small Magellanic Clouds are unique local laboratories for studying the formation and evolution of small galaxies in exquisite detail. The Survey of the MAgellanic Stellar History (SMASH) is an NOAO community Dark Energy Camera (DECam) survey of the Clouds mapping 480 deg² (distributed over ~2400 square degrees at ~20% filling factor) to ~24th mag in ugriz. The primary goals of SMASH are to identify low surface brightness stellar populations associated with the stellar halos and tidal debris of the Clouds, and to derive spatially resolved star formation histories. Here, we present a summary of the survey, its data reduction, and a description of the first public Data Release (DR1). The SMASH DECam data have been reduced with a combination of the NOAO Community Pipeline, the PHOTRED automated point-spread-function photometry pipeline, and custom calibration software. The astrometric precision is ~15 mas and the accuracy is ~2 mas with respect to the Gaia reference frame. The photometric precision is ~0.5%-0.7% in griz and ~1% in u, with a calibration accuracy of ~1.3% in all bands. The median 5σ point-source depths in ugriz are 23.9, 24.8, 24.5, 24.2, and 23.5 mag. The SMASH data have already been used to discover the Hydra II Milky Way satellite, the SMASH 1 old globular cluster likely associated with the LMC, and extended stellar populations around the LMC out to R ~ 18.4 kpc. SMASH DR1 contains measurements of ~100 million objects distributed in 61 fields. A prototype version of the NOAO Data Lab provides data access and exploration tools.
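As a small, hedged illustration of how the DR1 depth quoted above might be used when filtering a catalog, the sketch below keeps sources brighter than the median 5σ g-band depth; the table contents and column names (gmag, gerr) are assumptions made for the example, not the actual DR1 schema.

```python
import pandas as pd

# tiny stand-in for a SMASH-like object table; column names are assumed
catalog = pd.DataFrame({
    "gmag": [21.4, 23.9, 24.6, 25.1, 22.7],
    "gerr": [0.01, 0.08, 0.15, 0.35, 0.03],
})

G_DEPTH_5SIGMA = 24.8   # median 5-sigma g-band depth quoted for SMASH DR1

# keep objects brighter than the nominal depth with reasonably small errors
bright = catalog[(catalog["gmag"] < G_DEPTH_5SIGMA) & (catalog["gerr"] < 0.2)]
print(f"{len(bright)} of {len(catalog)} objects pass the depth cut")
```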
533

Virtualized Reconfigurable Resources and Their Secured Provision in an Untrusted Cloud Environment

Genßler, Paul R. 09 January 2018 (has links) (PDF)
The cloud computing business grows year after year. To keep up with increasing demand and to offer more services, data center providers are always searching for novel architectures. One such architecture is the FPGA, reconfigurable hardware with high compute power and energy efficiency. But some clients cannot make use of these remote processing capabilities: not every involved party is trustworthy, and the complex management software has potential security flaws, so clients' sensitive data and algorithms cannot be sufficiently protected. In this thesis, state-of-the-art hardware, cloud and security concepts are analyzed and combined. On one side are reconfigurable virtual FPGAs. They are a flexible resource and fulfill the cloud characteristics, at the price of security. On the other side stands a strong requirement for exactly that security. To provide it, an immutable controller is embedded, enabling a direct, confidential and secure transfer of clients' configurations. This establishes a trustworthy compute space inside an untrusted cloud environment. Clients can securely transfer their sensitive data and algorithms without involving vulnerable software or the data center provider. The concept is implemented as a prototype, and based on it, necessary changes to current FPGAs are analyzed. To fully enable reconfigurable yet secure hardware in the cloud, a new hybrid architecture is required.
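The confidential configuration transfer described above can be sketched as follows, assuming a symmetric key agreed between the client and the embedded controller ahead of time; the function names and the use of AES-GCM are illustrative choices, not the thesis's actual protocol.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_bitstream(bitstream: bytes, key: bytes, vfpga_slot: int) -> bytes:
    """Client side: encrypt a partial bitstream so that only the embedded
    controller holding `key` can decrypt and load it. The target slot id is
    bound to the ciphertext as associated data so it cannot be redirected."""
    nonce = os.urandom(12)                # 96-bit nonce for AES-GCM
    aad = vfpga_slot.to_bytes(4, "big")   # authenticated but not encrypted
    ciphertext = AESGCM(key).encrypt(nonce, bitstream, aad)
    return nonce + ciphertext

def controller_load(blob: bytes, key: bytes, vfpga_slot: int) -> bytes:
    """Controller side: authenticate and decrypt before configuring the slot.
    Any tampering by the management software raises an exception."""
    nonce, ciphertext = blob[:12], blob[12:]
    aad = vfpga_slot.to_bytes(4, "big")
    return AESGCM(key).decrypt(nonce, ciphertext, aad)

# hypothetical usage with a pre-shared 256-bit key
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_bitstream(b"<partial bitstream bytes>", key, vfpga_slot=2)
plain = controller_load(blob, key, vfpga_slot=2)
```

Binding the target slot as associated data is one simple way to keep the untrusted management software from redirecting a configuration, which mirrors the idea of keeping the provider out of the trust chain.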
534

An SDN-based Framework for QoS-aware Mobile Cloud Computing

Ekanayake Mudiyanselage, Wijaya Dheeshakthi January 2016 (has links)
In mobile cloud computing (MCC), rich mobile application data is processed in the cloud infrastructure, relieving resource-limited mobile devices from computationally complex tasks. However, because of user mobility and the ubiquity of access, providing time-critical rich applications over a remote cloud infrastructure is a challenging task for mobile application service providers. According to the literature, placing cloud services in close proximity to users has been identified as a way to achieve lower end-to-end access delay and thereby provide a higher quality of experience (QoE) for rich mobile application users. However, providing a high Quality of Service (QoS) under mobility is still a challenge for close-proximity clouds: access delay to a closely placed cloud tends to increase over time as users move away from it. The reactive resource relocation mechanisms proposed in the literature do not provide a comprehensive way to guarantee QoS while also minimizing the service provisioning cost for mobile cloud service providers. Therefore, exploiting the benefits of SDN and data plane programmability with logically centralized controllers, a resource allocation framework is proposed for IaaS mobile clouds with regional datacenters. The user mobility problem is analyzed within SDN-enabled wireless networks, addressing the service level agreement violations that could occur with inter-regional mobility. The proposed framework includes an optimization algorithm to provide seamless cloud service during user mobility. Furthermore, a service provisioning cost minimization criterion is considered when allocating resources and handling inter-regional user mobility.
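As a hedged illustration of the kind of placement decision such a framework must make, the sketch below picks the cheapest regional datacenter that still meets a delay SLA; the region names, delays, costs and the greedy rule are assumptions for the example, not the optimization algorithm of the thesis.

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    delay_ms: float   # estimated access delay from the user's current location
    cost: float       # provisioning cost per hour for the required resources

def place_service(regions, sla_delay_ms):
    """Pick the cheapest regional datacenter that satisfies the delay SLA.
    Fall back to the lowest-delay region if none meets the SLA (a violation
    the framework would have to report)."""
    feasible = [r for r in regions if r.delay_ms <= sla_delay_ms]
    if feasible:
        return min(feasible, key=lambda r: r.cost)
    return min(regions, key=lambda r: r.delay_ms)

# hypothetical regions and SLA; values are illustrative only
regions = [Region("region-a", 18.0, 1.40),
           Region("region-b", 32.0, 0.90),
           Region("region-c", 55.0, 0.60)]
print(place_service(regions, sla_delay_ms=40.0).name)   # -> region-b
```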
535

An SDN Assisted Framework for Mobile Ad-hoc Clouds

Balasubramanian, Venkatraman January 2017 (has links)
Studies over time have shown that a mobile "edge cloud" formed by hand-held devices can be a productive resource for providing services in the mobile cloud landscape. Access to such a pool of devices is largely ad hoc and driven purely by the needs of the user. The pool can provide an infrastructure for various services processed through volunteer node participation, where a node in the vicinity is itself a service provider. This view of cloud formation, in which a constellation of devices collectively provides a service, is the basis of Mobile Ad-hoc Cloud Computing. In this thesis, an architecture is designed for providing Infrastructure as a Service in Mobile Ad-hoc Cloud Computing. The performance evaluation reveals a gain in execution time when offloading to the mobile ad-hoc cloud. Further, this architecture enables discovering a dedicated pool of volunteer devices for computation, and an optimized task scheduling algorithm is proposed that provides coordinated resource allocation. However, failure to maintain the service across heterogeneous networks shows the inability of present-day networks to adapt to frequent changes. Thus, owing to the heavy dependence on the centralized mobile network, the service-related issues in a mobile ad-hoc cloud need to be addressed. As a result, using the principles of Software Defined Networking (SDN), a disruption-tolerant Mobile Ad-hoc Cloud framework is proposed. To evaluate this framework, a comprehensive case study is provided that shows a round-trip-time improvement when using an SDN controller.
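A minimal sketch of the offloading decision implied by the reported execution-time gain is given below; the overhead model and the numbers are illustrative assumptions, not figures from the evaluation.

```python
def should_offload(local_exec_s, remote_exec_s, data_mb, link_mbps, setup_s=0.5):
    """Offload a task to the ad-hoc cloud only if the estimated remote
    execution time plus transfer and coordination overhead beats running
    the task locally."""
    transfer_s = (data_mb * 8.0) / link_mbps
    return remote_exec_s + transfer_s + setup_s < local_exec_s

# hypothetical task: 12 s locally, 4 s on volunteer nodes, 6 MB over 20 Mbit/s
print(should_offload(local_exec_s=12.0, remote_exec_s=4.0,
                     data_mb=6.0, link_mbps=20.0))   # True: ~6.9 s < 12 s
```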
536

La construction des Business Models des fournisseurs de services d'infrastructure Cloud Computing (IaaS) / Building "Infrastructure as a Service" (IaaS) providers Business Models

Leon, Franck 24 March 2015 (has links)
The emergence of cloud computing is changing the landscape of the infrastructures that support IT systems. The originality of cloud computing lies primarily in the new consumption mode offered to customers: IT resources as a service, provided on demand. Hardware and software vendors that historically based their income on product sales and software licenses have faced a change in their revenue models and therefore have to consider new business models. This work shows that cloud infrastructure (IaaS) providers define themselves as cloud operators. They act as service aggregators and offer functional infrastructure services, available on demand and accessible remotely. These providers build an ecosystem of supplier partners and an ecosystem of product partners to increase the overall added value. The service level agreement (SLA) becomes the object of the transaction between provider and customer: by signing the contract, the customer offloads all technical issues and transfers them to the provider. When setting prices, an assumed utilization rate is taken into account and forms the basis of the cost calculations. We then propose three levers for cloud infrastructure providers to increase their share of added value: (1) lowering costs through technological innovation, (2) attracting and retaining customers so as to maintain a high utilization rate, and (3) developing a services ecosystem.
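The role of the assumed utilization rate in pricing can be illustrated with a small sketch; the cost figures and the linear cost model are assumptions made for the example, not data from the study.

```python
def hourly_price(capex_per_server, lifetime_hours, opex_per_hour,
                 vms_per_server, expected_utilization, margin=0.20):
    """Unit price an IaaS provider must charge per VM-hour so that the
    assumed utilization rate covers infrastructure cost plus a margin."""
    cost_per_server_hour = capex_per_server / lifetime_hours + opex_per_hour
    billable_vm_hours = vms_per_server * expected_utilization
    return cost_per_server_hour * (1 + margin) / billable_vm_hours

# hypothetical figures: a 6000-euro server amortized over 3 years,
# 0.25 euro/h of power and operations, 16 VMs per server, 60% utilization
price = hourly_price(6000, 3 * 365 * 24, 0.25, 16, 0.60)
print(f"{price:.3f} per VM-hour")
```

Under these assumptions, raising the expected utilization directly lowers the break-even price per VM-hour, which is why attracting and retaining customers (lever 2) matters as much as cutting costs.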
537

Managing consistency for big data applications : tradeoffs and self-adaptiveness / Gérer la cohérence pour les applications big data : compromis et auto-adaptabilité

Chihoub, Houssem Eddine 10 December 2013 (has links)
In the era of Big Data, data-intensive applications handle extremely large volumes of data while requiring fast processing times. A large number of such applications run in the cloud in order to benefit from cloud elasticity, easy on-demand deployment, and cost-efficient pay-as-you-go usage. In this context, replication is an essential feature for dealing with Big Data challenges: it enables high availability through multiple replicas, fast access to local replicas, fault tolerance, and disaster recovery. However, replication introduces the major issue of data consistency across the different copies. Consistency management is critical for Big Data systems. Strong consistency models impose serious limitations on scalability and performance because of the synchronization they require, whereas weak and eventual consistency models reduce the performance overhead and enable high availability but may, under certain scenarios, tolerate too much temporal inconsistency. In this Ph.D. thesis, we address the issue of consistency tradeoffs in large-scale Big Data systems and applications. We first focus on consistency management at the storage system level and propose an automated self-adaptive model, named Harmony, that scales the consistency level, and the number of replicas involved in operations, up or down at runtime in order to provide the highest possible performance while preserving the application's consistency requirements. In addition, we present a thorough study of the impact of consistency management on the monetary cost of running in the cloud, and we leverage this study to propose a cost-efficient consistency tuning approach, named Bismar. In a third direction, we study the impact of consistency management on the energy consumption of distributed storage systems; based on our findings, we investigate adaptive storage cluster configurations that target energy savings. To complete this system-side study, we address consistency management at the application level. Applications differ in nature and so do their consistency requirements, which cannot be understood at the storage system level alone. We therefore propose an application behavior model that captures the consistency requirements of an application and, based on this model, an online prediction approach, named Chameleon, that adapts to application-specific needs and provides customized consistency.
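A minimal sketch of the scale-up/scale-down behavior attributed to Harmony is shown below; the actual model estimates the stale-read rate from network and workload conditions, whereas here it is taken as a given input, and the thresholds are illustrative assumptions.

```python
def tune_read_quorum(current_quorum, stale_read_rate, tolerance,
                     n_replicas, margin=0.2):
    """Raise the read quorum when the estimated stale-read rate exceeds the
    application's tolerance; relax it again when there is comfortable slack,
    trading consistency strength against latency at runtime."""
    if stale_read_rate > tolerance and current_quorum < n_replicas:
        return current_quorum + 1          # stronger consistency needed
    if stale_read_rate < tolerance * (1 - margin) and current_quorum > 1:
        return current_quorum - 1          # weaker consistency is acceptable
    return current_quorum

quorum = 1
for observed_rate in [0.02, 0.08, 0.12, 0.03, 0.01]:   # hypothetical estimates
    quorum = tune_read_quorum(quorum, observed_rate, tolerance=0.05, n_replicas=3)
    print(quorum)   # 1, 2, 3, 2, 1
```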
538

Business Intelligence v prostředí Cloudu / Business Intelligence in the Cloud

Náhlovský, Tomáš January 2015 (has links)
This master thesis deals with Business Intelligence in a Cloud computing environment and compares some of the solutions currently offered on the market. The theoretical part focuses on defining the terms Business Intelligence and Cloud computing and their combination, including a description of the components, functionalities and technologies they use. It also describes current trends in the Business Intelligence world. The practical part compares the traditional approach to Business Intelligence solutions with solutions in the Cloud. It describes the components of the cloud and the process of migrating from a traditional solution to the Cloud. The practical part further includes an analysis of the current Business Intelligence market, the selection of three providers based on this analysis, a description of their solutions, and an evaluation of these tools. The main objectives and contributions of this thesis are a general definition of the BI and Cloud concepts and an analysis of available solutions, including comparisons.
539

Analýza použitelnosti cloud computingu pro práci na dálku / Analysis of the usability of cloud computing for teleworking

Pospíšil, Václav January 2012 (has links)
Cloud computing has become a serious player in ICT over the past few years. The technical and economic attributes of cloud computing can change the outlook of an organization. These benefits are examined and placed in the larger picture of teleworking. The critical factor is flexibility, which is crucial both for the organization and for the individual. The importance of flexibility is illustrated by the example of an organization that is considering whether cloud computing and teleworking are the right path for its future.
540

Towards auto-scaling in the cloud

Yazdanov, Lenar 16 January 2017 (has links) (PDF)
Cloud computing provides easy access to computing resources. Customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance suffers whenever the peak load occurs. The second case leads to wasted money, since resources remain underutilized most of the time. Therefore, there is a need for more sophisticated resource provisioning techniques that can automatically scale the application's resources according to workload demand and performance constraints. Large cloud providers such as Amazon, Microsoft and RightScale offer auto-scaling services; however, without proper configuration and testing, such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources and meet user-specified performance objectives.
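For illustration, the sketch below shows the plain threshold rule that generic auto-scaling services typically apply, and which the application-specific techniques investigated here aim to improve upon; the thresholds and utilization samples are assumptions made for the example.

```python
def autoscale(current_vms, cpu_util, min_vms=1, max_vms=20,
              upper=0.75, lower=0.30):
    """Simple threshold rule: add a VM when average utilization is high,
    remove one when it is low, and otherwise keep the current allocation."""
    if cpu_util > upper and current_vms < max_vms:
        return current_vms + 1
    if cpu_util < lower and current_vms > min_vms:
        return current_vms - 1
    return current_vms

vms = 2
for util in [0.82, 0.88, 0.64, 0.22, 0.18]:   # hypothetical utilization samples
    vms = autoscale(vms, util)
    print(vms)   # 3, 4, 4, 3, 2
```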
