451

A dynamic middleware to integrate multiple cloud infrastructures with remote applications

Bhattacharjee, Tirtha Pratim 04 December 2014 (has links)
In an era with a compelling need for greater computation power, the aggregation of software system components is becoming more challenging and diverse. New-generation scientific applications have become hubs of complex, intense computation performed on huge, exponentially growing data sets. With the development of parallel algorithms, the design of multi-user web applications, and frequent changes in software architecture, research institutes and organizations face a growing challenge. Network science is an interesting field posing extreme computation demands to sustain complex large-scale networks. Several static or dynamic network analyses have to be performed through algorithms implementing complex graph theory, statistical mechanics, data mining, and visualization. Similarly, high-performance computation infrastructures are taking on multiple forms and expanding in unprecedented ways. In this age, it is essential for software solutions to migrate to scalable platforms and to integrate cloud-enabled data-center clusters for higher computation needs. With the aggressive adoption of cloud infrastructures and resource-intensive web applications, there is thus a pressing need for a dynamic middleware to bridge the gap and effectively coordinate the integrated system. Such a heterogeneous environment calls for a transparent, portable, and flexible solution stack. In this project, we propose the adoption of the Virtual Machine aware Portable Batch System Cluster (VM-aware PBS Cluster), a self-initiating and self-regulating cluster of virtual machines (VMs) capable of operating and scaling on any cloud infrastructure. This is a unique but simple solution that lets large-scale software migrate to cloud infrastructures while keeping most of the application stack intact. We have also designed and implemented the Cloud Integrator Framework, a dynamic cloud-aware middleware for the proposed VM-aware PBS cluster. This framework regulates job distribution in an aggregate of VMs and optimizes resource consumption through on-demand VM initialization and termination, as sketched below. The model was integrated into CINET, a network science application, enabling it to run large-scale network analysis and simulation tasks on varied cloud platforms such as OpenStack and Amazon EC2. / Master of Science
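As an illustrative sketch of the kind of on-demand scaling loop such a middleware performs (not the actual Cloud Integrator Framework code; all names and thresholds are hypothetical, and a real middleware would call the OpenStack or EC2 APIs where the placeholders are):

```python
MAX_VMS = 16           # hypothetical cap on cluster size
IDLE_GRACE_CYCLES = 3  # retire a VM only after it stays idle this many checks

def queued_jobs() -> int:
    """Stub: a real middleware would parse `qstat` output from the PBS head node."""
    return 5

def scale(pool: dict, queued: int) -> None:
    """Launch a VM when jobs wait; terminate VMs that stay idle too long."""
    if queued > 0 and len(pool) < MAX_VMS:
        vm_id = f"vm-{len(pool)}"   # stands in for a cloud-API launch call
        pool[vm_id] = 0             # idle-cycle counter for the new VM
        print(f"launching {vm_id} for {queued} queued jobs")
    for vm_id in list(pool):
        pool[vm_id] = pool[vm_id] + 1 if queued == 0 else 0
        if pool[vm_id] >= IDLE_GRACE_CYCLES:
            del pool[vm_id]         # stands in for a cloud-API terminate call
            print(f"terminating idle {vm_id}")

pool: dict = {}
scale(pool, queued_jobs())
```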
452

Formation of the Cloud: History, Metaphor, and Materiality

Croker, Trevor D. 14 January 2020 (has links)
In this dissertation, I look at the history of cloud computing to demonstrate the entanglement of history, metaphor, and materiality. In telling this story, I argue that metaphors play a powerful role in how we imagine, construct, and maintain our technological futures. The cloud, as a metaphor in computing, works to simplify complexities in distributed networking infrastructures. The language and imagery of the cloud has been used as a tool that helps cloud providers shift public focus away from potentially important regulatory, environmental, and social questions while constructing a new computing marketplace. To address these topics, I contextualize the history of the cloud by looking back at the stories of utility computing (1960s-70s) and ubiquitous computing (1980s-1990s). These visions provide an alternative narrative about the design and regulation of new technological systems. Drawing upon these older metaphors of computing, I describe the early history of the cloud (1990-2008) in order to explore how this new vision of computing was imagined. I suggest that the metaphor of the cloud was not a historical inevitability. Rather, I argue that the social-construction of metaphors in computing can play a significant role in how the public thinks about, develops, and uses new technologies. In this research, I explore how the metaphor of the cloud underplays the impact of emerging large-scale computing infrastructures while at the same time slowly transforming traditional ownership-models in digital communications. Throughout the dissertation, I focus on the role of materiality in shaping digital technologies. I look at how the development of the cloud is tied to the establishment of cloud data centers and the deployment of global submarine data cables. Furthermore, I look at the materiality of the cloud by examining its impact on a local community (Los Angeles, CA). Throughout this research, I argue that the metaphor of the cloud often hides deeper socio-technical complexities. Both the materials and metaphor of the cloud work to make the system invisible. By looking at the material impact of the cloud, I demonstrate how these larger economic, social, and political realities are entangled in the story and metaphor of the cloud. / Doctor of Philosophy / This dissertation tells the story of cloud computing by looking at the history of the cloud and then discussing the social and political implications of this history. I start by arguing that the cloud is connected to earlier visions of computing (specifically, utility computing and ubiquitous computing). By referencing these older histories, I argue that much of what we currently understand as cloud computing is actually connected to earlier debates and efforts to shape a computing future. Using the history of computing, I demonstrate the role that metaphor plays in the development of a technology. Using these earlier histories, I explain how cloud computing was coined in the 1990s and eventually became a dominant vision of computing in the late 2000s. Much of the research addresses how the metaphor of the cloud is used, the initial reaction to the idea of the cloud, and how the creation of the cloud did (or did not) borrow from older visions of computing. This research looks at which people use the cloud, how the cloud is marketed to different groups, and the challenges of conceptualizing this new distributed computing network. This dissertation gives particular weight to the materiality of the cloud. 
My research focuses on the cloud's impact on data centers and submarine communication data cables. Additionally, I look at the impact of the cloud on a local community (Los Angeles, CA). Throughout this research, I argue that the metaphor of the cloud often hides deeper complexities. By looking at the material impact of the cloud, I demonstrate how larger economic, social, and political realities are entangled in the story and metaphor of the cloud.
453

From e-government to cloud-government: challenges of Jordanian citizens’ acceptance for public services

Alkhwaldi, Abeer F.A.H., Kamala, Mumtaz A., Qahwaji, Rami S.R. 10 May 2018 (has links)
At the inception of the third millennium, there is much evidence that cloud technologies have become a strategic trend for many governments, not only in developed countries (e.g. the UK, Japan and the USA) but also in developing countries (e.g. Malaysia and countries in the Middle East region). These countries have launched cloud computing initiatives for enhanced standardization of IT resources, cost reduction and more efficient public services. Cloud-based e-government services are considered one of the high priorities for government agencies in Jordan. Although experiencing phenomenal evolution, government cloud services still suffer from the adoption challenges of e-government initiatives (e.g. technological, human, social and financial aspects), which need to be considered carefully by governments contemplating their implementation. While e-government adoption from the citizens' perspective has been extensively investigated using different theoretical models, these models have not paid adequate attention to security issues. This paper presents a pilot study that investigates citizens' perceptions of the extent to which these challenges inhibit the acceptance and use of cloud computing in the Jordanian public sector and examines the effect of these challenges on citizens' security perceptions. Based on the analysis of data collected from online surveys, some important challenges were identified. The results can help guide the successful acceptance of cloud-based e-government services in Jordan.
454

Utilization-adaptive Memory Architectures

Panwar, Gagandeep 14 June 2024 (has links)
DRAM contributes significantly to a server system's cost and global warming potential. To make matters worse, DRAM density scaling has not kept up with the scaling in logic and storage technologies. An effective way to reduce DRAM's monetary and environmental cost is to increase its effective utilization and extract the best possible performance in all utilization scenarios. To this end, this dissertation proposes Utilization-adaptive Memory Architectures that enhance the memory controller with the ability to adapt to current memory utilization and implement techniques to boost system performance. These techniques fall under two categories: (i) The techniques under Utilization-adaptive Hardware Memory Replication target the scenario where memory is underutilized and aim to boost performance versus a conventional system without replication, and (ii) The techniques under Utilization-adaptive Hardware Memory Compression target the scenario where memory utilization is high and aim to significantly increase memory capacity while closing the performance gap versus a conventional system that has sufficient memory and does not require compression. / Doctor of Philosophy / A computer system's memory stores information for the system's immediate use (e.g., data and instructions for in-use programs). The performance and capacity of the dominant memory technology – Dynamic Random Access Memory (DRAM) – has not kept up with advancements in computing devices such as CPUs. Furthermore, DRAM significantly contributes to a server's carbon footprint because a server can have over a thousand DRAM chips – substantially more than any other type of chip. DRAM's manufacturing cycle and lifetime energy use make it the most carbon-unfriendly component on today's servers. To reduce the environmental impact of DRAM, an intuitive way is to increase its utilization. To this end, this dissertation explores Utilization-adaptive Memory Architectures which enable the memory controller to adapt to the system's current memory utilization through a variety of techniques such as: (i) Utilization-adaptive Hardware Memory Replication which copies in-use data to free memory and uses the extra copy to improve performance, and (ii) Utilization-adaptive Hardware Memory Compression which uses dense representation for data to save memory and allows the system to run applications that require more memory than the physically installed memory. Compared to conventional systems that do not feature these techniques, these techniques improve performance for different memory utilization scenarios ranging from low to high.
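As an illustrative sketch of the adaptation idea (a software toy, not the dissertation's hardware design; the thresholds are hypothetical and the real decisions are made inside the memory controller):

```python
def memory_mode(used_gb: float, installed_gb: float) -> str:
    """Pick a technique based on current memory utilization."""
    utilization = used_gb / installed_gb
    if utilization < 0.5:
        # Plenty of free memory: replicate in-use data to boost performance.
        return "hardware replication"
    if utilization > 0.9:
        # Near capacity: compress data to extend effective capacity.
        return "hardware compression"
    return "conventional"

for used in (100.0, 300.0, 480.0):
    print(used, "GB used ->", memory_mode(used, 512.0))
```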
455

Design of robust, malleable arithmetic units using lookup tables

Raudies, Florian January 2014 (has links)
Thesis (M.Sc.Eng.) / Cloud computing demands reconfigurability on a sub-core basis to maximize performance per customer application and the overall utilization of hardware resources in a data center. We propose the design of arithmetic units (AUs) using lookup tables (LUTs), which can also function as cache units. We imagine such LUT-based implementations of AUs and caches to be part of a malleable computing paradigm that allows reconfiguration of the architecture inside a core and across cores. Our envisioned malleable computing can configure an LUT to behave as an AU or a cache at run time, depending on the customers, their application requirements, and the computational demand in a data center. To evaluate the scope for reconfigurability of LUTs, we determined the exchange rate between caches and AUs: the cost of designing a LUT-based AU expressed in kilobytes of cache. In this thesis, we provide exchange rates for LUT-based adder and multiplier designs. For our analysis, we use CACTI 6.5 to estimate the access time, area, and power of caches varying in size, number of banks, and set associativity, which we fitted with multinomial models. The delay time of these LUT-based designs is comparable to that of logic-gate-based AU designs scaled using logical-effort theory. LUT-based AUs achieve delay times of 0.5 ns to 1.5 ns (2 GHz to 667 MHz) in the 45 nm Nangate open cell library. The cost of an adder ranges from 0.125 kB to 5 kB of cache; the cost of a multiplier ranges from 2.7 kB to 2.8 kB. The area of these LUT-based designs is smaller than or equal to that of logic-gate-based adder and multiplier designs. Using RRAM technology, the area can be reduced by two orders of magnitude at the cost of a one-order-of-magnitude slowdown in delay time. We also compared the robustness of our LUT-based adder and multiplier designs to logic-gate-equivalent adder and multiplier designs in the presence of soft errors, using analytical models and simulations. We show that LUT-based designs are more resilient to soft errors when comparing the output error rates of AUs. Our analytical models can help design robust AUs by quantifying design patterns in terms of their robustness.
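As an illustrative sketch of the cache-for-arithmetic trade (not from the thesis; a naive single-table adder is far more expensive than the designs whose 0.125–5 kB costs are quoted above, which is precisely why the exchange rate matters and why practical designs decompose the operation into smaller sub-tables):

```python
def naive_adder_lut_bytes(n_bits: int) -> float:
    """Bytes for one table mapping every (a, b) operand pair to an (n+1)-bit sum."""
    entries = 2 ** (2 * n_bits)   # all operand combinations
    bits_per_entry = n_bits + 1   # sum plus carry-out
    return entries * bits_per_entry / 8

def lut_add(a: int, b: int, table: list) -> int:
    """Add by table lookup instead of adder logic (4-bit operands here)."""
    return table[(a << 4) | b]

table = [a + b for a in range(16) for b in range(16)]
print(naive_adder_lut_bytes(4), "bytes for a naive 4-bit adder table")  # 160.0
print(lut_add(9, 6, table))  # 15
```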
456

Aerosol-cloud-precipitation interactions

Gryspeerdt, Edward January 2013 (has links)
Aerosols are thought to have a large effect on the climate, especially through their interactions with clouds. The magnitude and in some cases the sign of aerosol effects on cloud and precipitation are highly uncertain. Part of the uncertainty comes from the multiple competing effects that aerosols have been proposed to have on cloud properties. In addition, covariation of cloud and aerosol properties with changing meteorological conditions has the ability to generate spurious correlations between cloud and aerosol properties. This work presents a new way to investigate aerosol-cloud-precipitation interactions while accounting for the influence of meteorology on cloud and aerosol. The clouds are separated into cloud regimes, which have similar retrieved cloud properties, to investigate the regime dependence of aerosol-cloud-precipitation interactions. The strong aerosol optical depth (AOD)-cloud fraction (CF) correlation is shown to have the ability to generate spurious correlations. The AOD-CF correlation is accounted for by investigating the frequency of transitions between cloud regimes in different aerosol environments. This time-dependent analysis is also extended to investigate the development of precipitation from each of the regimes as a function of their aerosol environment. A modification of the regime transition frequencies consistent with an increase in stratocumulus persistence over ocean is found with increasing aerosol index (AI). Increases in transitions into the deep convective regime and in the precipitation rate consistent with an aerosol invigoration effect are also found over land. Comparisons to model output suggest that a large fraction of the observed effect on the stratocumulus persistence may be due to aerosol indirect effects. The model is not able to reproduce the observed effects on convective cloud, most likely due to the lack of parametrised effects of aerosol on convection. The magnitude of these effects is considerably smaller than correlations found by previous studies, emphasising the importance of meteorological covariation in observed aerosol-cloud-precipitation interactions.
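As an illustrative sketch of the kind of regime-transition stratification described above (not the thesis code; the column names, regime labels, and AOD threshold are hypothetical):

```python
import pandas as pd

# Toy observations: cloud regime at two consecutive times plus aerosol optical depth.
obs = pd.DataFrame({
    "regime_t0": ["Sc", "Sc", "Sc", "Cu", "Sc", "Cu"],
    "regime_t1": ["Sc", "Cu", "Sc", "DC", "Sc", "DC"],
    "aod":       [0.05, 0.08, 0.40, 0.45, 0.50, 0.06],
})

# Stratify by aerosol environment (hypothetical 0.2 threshold).
obs["aerosol_env"] = pd.cut(obs["aod"], bins=[0, 0.2, 1.0], labels=["low", "high"])

# Transition frequencies per aerosol environment; conditioning on the starting
# regime is what controls for the AOD-CF covariation discussed above.
freq = (obs.groupby(["aerosol_env", "regime_t0"], observed=True)["regime_t1"]
           .value_counts(normalize=True))
print(freq)
```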
457

Evaluation und Benchmarking von Sync 'n' Share Lösungen am Beispiel der TU Chemnitz / Evaluation and benchmarking of Sync 'n' Share solutions: the case of TU Chemnitz

Unger, Robert 04 May 2016 (has links)
To offer the staff and students of Technische Universität Chemnitz a platform for collaboration and for synchronizing their data across heterogeneous devices, the deployment of a cloud application is necessary. For reasons of German data-protection law, only an on-premises implementation can be considered, for which a large number of adequate products are available. To make a well-founded decision, a qualitative evaluation and short-listing of the candidate solutions is required. In addition, performance and scalability must be considered with respect to the expected number of end users. Together with the staff of the TU Chemnitz university computing center, the minimum requirements for a cloud service were recorded in a requirements specification, which also incorporates the wishes of early adopters. On this basis, the applications are to be examined and conclusions drawn about their usability. To generate quantitative results, user interactions of different magnitudes are simulated using the benchmarking system Apache JMeter. The test sets have to be created from scratch and should reproduce user behavior as authentically as possible. Where possible, optimized configurations are to be derived from the insights gained.
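As an illustrative sketch of how such load simulations are commonly driven (not taken from the thesis; the test plan `syncshare.jmx` and the `users` property are hypothetical, while `-n`, `-t`, `-J` and `-l` are standard JMeter options), JMeter can be invoked headless for several simulated user-population sizes:

```python
import subprocess

for users in (10, 100, 1000):
    subprocess.run(
        ["jmeter", "-n",                 # non-GUI (headless) run
         "-t", "syncshare.jmx",          # hypothetical test plan modeling uploads/downloads
         f"-Jusers={users}",             # property read by the plan's thread group
         "-l", f"results-{users}.jtl"],  # per-run sample log for later evaluation
        check=True,
    )
```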
458

Etude et mise en oeuvre d'une architecture pour l'authentification et la gestion de documents numériques certifiés : application dans le contexte des services en ligne pour le grand public / Study and implementation of an architecture for the authentication and management of certified digital documents: application in the context of online services for the general public

Abakar, Mahamat Ahmat 22 November 2012 (has links)
In an open environment such as the Internet, the interacting parties are sometimes unknown to each other and always dematerialized. The concepts and technologies of digital trust and information security must be combined to enable access control in an open environment. In this work, we study the major concepts behind this problem, then design and finally develop a functional system, based on access-control standards, for an open environment applied to the Internet. More precisely, our study consists in implementing an access-control architecture based on digital trust. The central element of this architecture is a rich user environment deployed online, equipped with three main modules that allow the user to carry out transactions: a policy-analysis module, a data-retrieval module and a policy-validation module. We developed the algorithms used in these modules. The workflow is as follows: the user requests a service from a service provider; the provider analyzes the request and extracts the applicable policy from its base of access-control rules. The architecture is designed using attribute-based access-control models and the XACML language. The policy contains the conditions the user must satisfy to obtain the right of access to the requested resource. The policy-analysis module lets the user analyze the policy received from the service provider, verifying with an algorithm whether the required information is available from identity sources the provider trusts. The data-retrieval module then lets the user retrieve his certificates, and the validation module lets him check that these certificates satisfy the policy. If the policy is satisfied, the user releases his certificates to the service provider. The design of this system rests on a set of technological building blocks studied and described in this work. The document begins with a study of various use cases in the field of online transactions, which highlights the problem of digital-identity management in open environments. Virtual organizations, the notion of partnership and trust are key elements in the design of trust-based access-control systems. A first study of a set of access-control models leads us to adopt the ABAC model and the XACML language for the design of our system. We then design the data model of our distributed access-control system and present and evaluate the key algorithms. Next, we design a protocol architecture that satisfies the interoperability needs of the entities involved: protocols for establishing a session with a system, for conveying an access-control policy, and for obtaining and disseminating information between trusted third parties. The last part is devoted to the implementation, realized in Python using the Django web framework.
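As an illustrative sketch of the user-side workflow described above (not the thesis implementation; the attribute names are hypothetical and the real system expresses policies in XACML rather than Python dictionaries), a minimal ABAC-style check might look like this:

```python
Policy = dict        # required attribute -> required value
Certificates = dict  # attribute -> value, as certified by a trusted identity source

def analyze(policy: Policy, available: set) -> bool:
    """Policy-analysis module: are all required attributes obtainable at all?"""
    return set(policy) <= available

def validate(policy: Policy, certs: Certificates) -> bool:
    """Validation module: do the retrieved certificates satisfy the policy?"""
    return all(certs.get(attr) == value for attr, value in policy.items())

# Hypothetical policy extracted by the provider from its access-control rules.
policy = {"age_over_18": "true", "country": "FR"}
# Certificates retrieved by the data-retrieval module from identity sources.
certs = {"age_over_18": "true", "country": "FR", "email": "user@example.org"}

if analyze(policy, set(certs)) and validate(policy, certs):
    print("release certificates to the service provider")
```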
459

Mise en oeuvre d’une plateforme de gestion et de dissémination des connaissances pour des réseaux autonomiques / A knowledge management and dissemination platform for autonomic networks

Souihi, Sami 03 December 2013 (has links)
The growth of the Internet, the emergence of new needs driven by the advent of so-called smart devices (smartphones, tablets, etc.), and the appearance of new underlying applications are inducing many changes in the ever more massive use of information technology in our daily lives and across all sectors of activity. These new uses have required rethinking the very foundation of the network architecture, which has led to the emergence of new concepts based on a "use-centric" view in place of a "network-centric" view. Consequently, the control mechanisms of the transport network must exploit not only information from the data, control and management planes, but also knowledge about the current state of the network (traffic, resources, application rendering, etc.), acquired or learned by deductive or inductive inference, so as to speed up decision-making by the network's control elements. The work in this thesis addresses this last aspect and, more generally, belongs to the field of autonomic networks. The thesis implements methods for managing, distributing and exploiting the knowledge necessary for the proper functioning of the transport network. The knowledge plane implemented here rests on two ideas: managing knowledge within an adaptive hierarchical structure in which only selected nodes are in charge of disseminating it, and linking these nodes through a set of specialized overlay networks that make the knowledge easier to exploit. Compared with traditionally used platforms, the one developed in this thesis clearly demonstrates the value of the proposed algorithms in terms of access time, distribution, and load sharing between control nodes for knowledge management. For validation purposes, the platform was applied to two examples: cloud computing and smart grids.
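As an illustrative sketch of the hierarchical dissemination idea (not the thesis platform; the node-assignment rule is hypothetical, and the real control nodes are linked through specialized overlay networks rather than a flat dictionary):

```python
from typing import Optional

class ControlNode:
    """A selected node that stores knowledge and answers queries for its leaves."""
    def __init__(self, name: str):
        self.name = name
        self.knowledge: dict = {}

    def publish(self, key: str, value: str) -> None:
        self.knowledge[key] = value

    def query(self, key: str) -> Optional[str]:
        return self.knowledge.get(key)

# Adaptive hierarchy reduced to its simplest form: many leaves per control node.
controllers = [ControlNode("ctrl-0"), ControlNode("ctrl-1")]
assignment = {f"leaf-{i}": controllers[i % 2] for i in range(6)}

assignment["leaf-3"].publish("link-42/load", "0.73")  # a leaf publishes via its controller
print(assignment["leaf-5"].query("link-42/load"))     # a sibling leaf reads it back: 0.73
```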
460

A sociological study of the mobility of high school graduates of a small northeastern Kansas community 1935 to 1955

Taylor, Lloyd Andrew. January 1957 (has links)
Call number: LD2668 .T4 1957 T31 / Master of Science
