571

Scalable Scientific Computing Algorithms Using MapReduce

Xiang, Jingen January 2013 (has links)
Cloud computing systems, like MapReduce and Pregel, provide a scalable and fault tolerant environment for running computations at massive scale. However, these systems are designed primarily for data intensive computational tasks, while a large class of problems in scientific computing and business analytics are computationally intensive (i.e., they require a lot of CPU in addition to I/O). In this thesis, we investigate the use of cloud computing systems, in particular MapReduce, for computationally intensive problems, focusing on two classic problems that arise in scientific computing and also in analytics: maximum clique and matrix inversion. The key contribution that enables us to effectively use MapReduce to solve the maximum clique problem on dense graphs is a recursive partitioning method that partitions the graph into several subgraphs of similar size and running time complexity. After partitioning, the maximum cliques of the different partitions can be computed independently, and the computation is sped up using a branch and bound method. Our experiments show that our approach leads to good scalability, which is unachievable by other partitioning methods since they result in partitions of different sizes and hence lead to load imbalance. Our method is more scalable than an MPI algorithm, and is simpler and more fault tolerant. For the matrix inversion problem, we show that a recursive block LU decomposition allows us to effectively compute in parallel both the lower triangular (L) and upper triangular (U) matrices using MapReduce. After computing the L and U matrices, their inverses are computed using MapReduce. The inverse of the original matrix, which is the product of the inverses of the L and U matrices, is also obtained using MapReduce. Our technique is the first matrix inversion technique that uses MapReduce. We show experimentally that our technique has good scalability, and it is simpler and more fault tolerant than MPI implementations such as ScaLAPACK.
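The block recursion behind the matrix-inversion contribution can be sketched serially. The following minimal NumPy version is an illustration only: the thesis runs the block multiplications and triangular inversions as MapReduce jobs, and the unpivoted recursion and helper names here are assumptions, not the thesis's code.

```python
import numpy as np

def block_lu(A):
    # Recursive block LU decomposition A = L @ U, without pivoting.
    # In the thesis, the block products below become MapReduce jobs;
    # this serial sketch only illustrates the recursion.
    n = A.shape[0]
    if n == 1:
        return np.ones((1, 1)), A.copy()
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    L11, U11 = block_lu(A11)
    U12 = np.linalg.solve(L11, A12)          # solves L11 @ U12 = A12
    L21 = np.linalg.solve(U11.T, A21.T).T    # solves L21 @ U11 = A21
    L22, U22 = block_lu(A22 - L21 @ U12)     # LU of the Schur complement
    L = np.block([[L11, np.zeros((k, n - k))], [L21, L22]])
    U = np.block([[U11, U12], [np.zeros((n - k, k)), U22]])
    return L, U

def invert_via_lu(A):
    # A^{-1} = U^{-1} @ L^{-1}; the triangular inverses and this final
    # product are the pieces the thesis computes with MapReduce.
    L, U = block_lu(A)
    return np.linalg.inv(U) @ np.linalg.inv(L)

A = np.random.rand(8, 8) + 8 * np.eye(8)     # diagonally dominant, so no pivoting needed
assert np.allclose(invert_via_lu(A), np.linalg.inv(A))
```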
572

Predicting Purchase Timing, Brand Choice and Purchase Amount of Firm Adoption of Radically Innovative Information Technology: A Business to Business Empirical Analysis

Bohling, Timothy R 01 May 2012 (has links)
Knowing what to sell, when to sell, and to whom to sell is essential buyer-behavior insight for allocating scarce marketing resources efficiently and effectively. Applying the theory of relationship marketing (Morgan and Hunt 1994), this study investigates the link between commitment and trust on the one hand and firm adoption of radically innovative information technology (IT) on the other. The construct of radical innovation is operationalized through the use of cloud computing. A review of the vast scholarly literature on radical innovation diffusion and adoption, and of the modeling techniques used to analyze buyer behavior, is followed by empirical estimation of each of the adoption questions: purchase timing, brand choice, and purchase amount. Then, inefficiencies in the independent-model process are highlighted, suggesting the need for an integrated model. Next, an integrated model is developed that links the purchase timing, brand choice, and purchase amount decisions. The essay concludes with insight for marketing practitioners into the strength of commitment and trust as factors in the adoption of radical innovation, an improved methodology for the business-to-business marketing literature, and potential paths for further research.
573

Enabling Technologies for Management of Distributed Computing Infrastructures

Espling, Daniel January 2013 (has links)
Computing infrastructures offer remote access to computing power that can be employed, e.g., to solve complex mathematical problems or to host computational services that need to be online and accessible at all times. From the perspective of the infrastructure provider, large amounts of distributed and often heterogeneous computer resources need to be united into a coherent platform that is then made accessible to and usable by potential users. Grid computing and cloud computing are two paradigms that can be used to form such unified computational infrastructures. Resources from several independent infrastructure providers can be joined to form large-scale decentralized infrastructures. The primary advantage of doing this is that it increases the scale of the available resources, making it possible to address more complex problems or to run a greater number of services on the infrastructures. In addition, there are advantages in terms of factors such as fault-tolerance and geographical dispersion. Such multi-domain infrastructures require sophisticated management processes to mitigate the complications of executing computations and services across resources from different administrative domains. This thesis contributes to the development of management processes for distributed infrastructures that are designed to support multi-domain environments. It describes investigations into how fundamental management processes such as scheduling and accounting are affected by the barriers imposed by multi-domain deployments, which include technical heterogeneity, decentralized and (domain-wise) self-centric decision making, and a lack of information on the state and availability of remote resources. Four enabling technologies or approaches are explored and developed within this work: (I) The use of explicit definitions of cloud service structure as inputs for placement and management processes to ensure that the resulting placements respect the internal relationships between different service components and any relevant constraints. (II) Technology for the runtime adaptation of Virtual Machines to enable the automatic adaptation of cloud service contexts in response to changes in their environment caused by, e.g., service migration across domains. (III) Systems for managing meta-data relating to resource usage in multi-domain grid computing and cloud computing infrastructures. (IV) A global fairshare prioritization mechanism that enables computational jobs to be consistently prioritized across a federation of several decentralized grid installations. Each of these technologies will facilitate the emergence of decentralized computational infrastructures capable of utilizing resources from diverse infrastructure providers in an automatic and seamless manner.

Note: the author changed surname from Henriksson to Espling in 2011.
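As one plausible reading of mechanism (IV), a federation-wide fairshare priority can be computed from per-site usage reports. The record format, the `target_share` policy, and the deviation-based priority in this small Python sketch are hypothetical illustrations, not the thesis's algorithm.

```python
from collections import defaultdict

# Hypothetical usage records: (site, project, cpu_hours) reported by each
# grid installation in the federation.
usage = [
    ("site-a", "astro", 1200.0), ("site-b", "astro", 300.0),
    ("site-a", "bio",    400.0), ("site-b", "bio",   900.0),
]
target_share = {"astro": 0.5, "bio": 0.5}  # federation-wide policy

def global_fairshare_priority(usage, target_share):
    # Rank projects by how far their federation-wide usage lags their target.
    # Positive priority => the project has used less than its share, so its
    # jobs should run first, consistently at every site.
    consumed = defaultdict(float)
    for _site, project, hours in usage:
        consumed[project] += hours
    total = sum(consumed.values()) or 1.0
    return {p: target_share[p] - consumed[p] / total for p in target_share}

print(global_fairshare_priority(usage, target_share))
# astro has used 1500/2800 ≈ 0.536 of the federation => slightly negative priority
```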
574

A Survey on Cloud Computing and Prospects for Information Visualization

Öztürk, Muhammed Hüseyin January 2010 (has links)
Today's computing vision lets users access services and applications via lightweight portable devices instead of powerful personal computers (PCs). Since today's applications and services need strong computing power and data storage, the question arises: who will provide these two resources if users do not? The cloud computing trend moves computing power and data storage from the users' side to the application infrastructure side. Services traditionally stored on users' own computers will move onto the cloud computing platform and be delivered over the Internet to their users. This new platform comes with its own benefits and design characteristics. Since all data will move onto a platform other than individual computers, information visualization becomes an opportunity both for analyzing and maintaining the cloud system's structure and for presenting abstract data to end users in a meaningful way.
575

IT-Moln så långt ögat når? : En rapport om IT-stöd för kundarbetet i nystartade företag / IT clouds in sight? : A report on IT support in the client management work for Start up companies

Nordqvist, Anna January 2012 (has links)
This degree project covers the topics of cloud technology and IT support for client management work in start-up companies, and was carried out primarily for an external client, the company Approdites AB. The company wanted information and recommendations regarding potential IT-support services that it could use in its work with clients. The project results in both an academic report covering, among other things, theory, method, and previous research, and a report/preliminary study containing recommendations on services as well as guidelines for the company in the areas concerned. Both reports aim to provide Approdites with relevant information about the areas at large and to suggest services that may suit its business. The academic report describes how the foundation of the work was laid, with theory, methods, and other key areas.
576

Design and Implementation of a Service Discovery and Recommendation Architecture for SaaS Applications

Sukkar, Muhamed January 2010 (has links)
An increasing number of software vendors are offering, or planning to offer, their applications as Software-as-a-Service (SaaS) to leverage the benefits of cloud computing and Internet-based delivery. Potential clients will therefore face a growing number of providers that satisfy their requirements, and choosing among them is a time-consuming and error-prone task that there is increasing demand to automate. In this work, we develop an architecture for automated service discovery and selection in a cloud computing environment. The system is based on an algorithm that recommends service choices to users based on both functional and non-functional characteristics of the available services. The system also derives automated ratings from the monitoring results of past service invocations to objectively detect badly behaving providers. We demonstrate the effectiveness of our approach using an early prototype that was developed following an object-oriented methodology and implemented using various open-source Java technologies and frameworks. The prototype uses a Chord DHT as its distributed backing store to achieve scalability.
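A minimal sketch of the selection idea, under stated assumptions: the `Service` fields, the subset test for functional requirements, and the uptime-then-latency ranking are illustrative stand-ins for the thesis's recommendation algorithm and its monitoring-derived ratings.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    features: set          # functional capabilities advertised by the provider
    latency_ms: float      # non-functional: measured average latency
    uptime: float          # non-functional: fraction of successful past invocations

def recommend(services, required, max_latency_ms):
    # Filter on functional requirements, then rank on monitored behaviour.
    candidates = [s for s in services
                  if required <= s.features and s.latency_ms <= max_latency_ms]
    # Derived rating: past uptime dominates, lower latency breaks ties.
    return sorted(candidates,
                  key=lambda s: (s.uptime, -s.latency_ms), reverse=True)

catalog = [
    Service("crm-a", {"crm", "export"}, 120.0, 0.999),
    Service("crm-b", {"crm"},            80.0, 0.95),
]
print([s.name for s in recommend(catalog, {"crm"}, 200.0)])  # ['crm-a', 'crm-b']
```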
577

Secure Schemes for Semi-Trusted Environment

Tassanaviboon, Anuchart January 2011 (has links)
In recent years, two distributed system technologies have emerged: Peer-to-Peer (P2P) and cloud computing. In the former, the computers at the edge of networks share their resources, i.e., computing power, data, and network bandwidth, and obtain resources from other peers in the same community. Although this technology enables efficiency, scalability, and availability at low cost of ownership and maintenance, peers defined as "like each other" are not wholly controlled by one another or by the same authority. In addition, resources and functionality in P2P systems depend on peer contribution, i.e., storing, computing, routing, etc. These specific aspects raise security concerns and invite attacks that many researchers try to address. Most solutions proposed by researchers rely on public-key certificates from an external Certificate Authority (CA) or a centralized Public Key Infrastructure (PKI). However, both CA and PKI are contradictory to fully decentralized P2P systems that are self-organizing and infrastructureless. To avoid this contradiction, this thesis concerns the provisioning of public-key certificates in P2P communities, which is a crucial foundation for securing P2P functionalities and applications. We create a framework, named the Self-Organizing and Self-Healing CA group (SOHCG), that can provide certificates without a centralized Trusted Third Party (TTP). In our framework, a CA group is initialized in a Content Addressable Network (CAN) by trusted bootstrap nodes and then grows to a mature state by itself. Based on our group management policies and predefined parameters, membership in a CA group is dynamic and has a uniform distribution over the P2P community; the size of a CA group is kept to a level that balances performance and acceptable security. A multicast group over the underlying CA group is constructed to reduce the communication and computation overhead of collaboration among CA members. To maintain the quality of the CA group, an honest majority of members is maintained by a Byzantine agreement algorithm, and all shares are refreshed gradually and continuously. Our CA framework has been designed to meet all design goals, being self-organizing, self-healing, scalable, resilient, and efficient. A security analysis shows that the framework enables key registration and certificate issuance with resistance to external attacks, i.e., node impersonation, man-in-the-middle (MITM), Sybil, and a specific form of DoS, as well as internal attacks, i.e., CA functionality interference and CA group subversion. Cloud computing is the most recent evolution of distributed systems that, like P2P systems, enables shared resources. Unlike P2P systems, cloud entities are asymmetric in their roles, as in client-server models, i.e., end-users collaborate with Cloud Service Providers (CSPs) through Web interfaces or Web portals. Cloud computing is a combination of technologies, e.g., SOA services, virtualization, grid computing, clustering, P2P overlay networks, management automation, and the Internet. With these technologies, cloud computing can deliver services with specific properties: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. However, these core technologies have their own intrinsic vulnerabilities, which expose cloud computing to specific attacks. Furthermore, since public clouds are a form of outsourcing, the security of users' resources must rely on CSPs' administration.
This situation raises two crucial security concerns for users: being locked into a single CSP and losing control of resources. Providing inter-operation between Application Service Providers (ASPs) and untrusted cloud storage is a countermeasure that can protect users from vendor lock-in and from losing control of their data. To meet this challenge, this thesis proposes a new authorization scheme, named OAuth and ABE based authorization (AAuth), that is built on the OAuth standard and leverages Ciphertext-Policy Attribute-Based Encryption (CP-ABE) and ElGamal-like masks to construct ABE-based tokens. The ABE tokens can facilitate a user-centric approach, end-to-end encryption, and end-to-end authorization in semi-trusted clouds. With these facilities, owners can take control of their data resting in semi-untrusted clouds and safely use services from unknown ASPs. To this end, our scheme divides the attribute universe into two disjoint sets: confined attributes, defined by owners to limit the lifetime and scope of tokens, and descriptive attributes, defined by authorities to certify the characteristics of ASPs. Security analysis shows that AAuth maintains the same security level as the original CP-ABE scheme and protects users from exposing their credentials to ASPs, as OAuth does. Moreover, AAuth can resist both external and internal attacks, including those from untrusted cloud storage. Since most cryptographic functions are delegated from owners to CSPs, AAuth gains computing power from the clouds. In our extensive simulation, AAuth's greater overhead relative to OAuth's was balanced by its greater security. Furthermore, our scheme works seamlessly with storage providers by retaining the providers' APIs in their usual form.
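The "ElGamal-like mask" can be illustrated with a toy multiplicative blinding over a prime field. Everything below (the group parameters, the stand-in token component, and how masks would attach to tokens) is an illustrative assumption, not the thesis's construction.

```python
import secrets

p = 2**127 - 1          # a Mersenne prime; real schemes use pairing-friendly groups
g = 3

component = 123456789                        # stand-in for a token component
r = secrets.randbelow(p - 2) + 1             # random blinding exponent
mask = pow(g, r, p)                          # ElGamal-like mask g^r mod p

blinded = component * mask % p               # what an outside observer sees
recovered = blinded * pow(mask, -1, p) % p   # only the mask holder can undo it
assert recovered == component
```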
578

Timed-Release Proxy Conditional Re-Encryption for Cloud Computing

Chen, Jun-Cheng 30 August 2011 (has links)
Mobile technology is developing very fast, and it is now common for people to fetch or edit files over the Internet using mobile devices such as notebooks, smart phones, and so on. Because a user may own several devices, keeping a file synchronized can be inconvenient, so that he cannot easily edit the same file from each of his devices. Recently, cloud technology has become more and more popular, and new business models have been launched. One of them is the storage platform Dropbox, which synchronizes users' files across their own devices and also allows users to share their files with others. However, it has been pointed out that Dropbox does not protect the privacy of these files well. Many encryption schemes have been proposed in the literature, but most of them do not support secret file sharing when deployed in a cloud environment. Even the schemes that do support this property only allow a file owner to share all of his files with others. In some situations, the file owner may want to ensure that the receiver cannot decrypt the ciphertext until a specified time arrives. Existing encryption schemes cannot achieve these goals simultaneously. Hence, in order to cope with these problems, we propose a timed-release proxy conditional re-encryption scheme for cloud computing. Not only are users' files stored safely, but each user can also freely share a desired file with another user. Furthermore, the receiver cannot obtain any information about the file until the chosen time arrives. Finally, we demonstrate the security of our proposed scheme via formal proofs.
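The data flow can be modeled with a deliberately toy, XOR-based sketch: it shows only who holds what and when decryption becomes possible, not the security of the actual construction (the real scheme is public-key and formally proven; every name, the shared secret, and the `force=True` shortcut below are toy assumptions).

```python
import hashlib, os
from datetime import datetime, timezone

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def now() -> str:
    return datetime.now(timezone.utc).isoformat()

# Time server: releases the trapdoor for a time label only once it arrives.
time_secret = os.urandom(32)
def trapdoor(label: str, force: bool = False) -> bytes:
    if not force and now() < label:
        raise PermissionError("release time not reached")
    return H(time_secret, label.encode())

# Owner: stores an encrypted file key at the cloud, then delegates it under
# a condition and a release time.  `shared` is a toy stand-in for the
# receiver's public key; the real scheme needs no shared secret.
owner_secret = os.urandom(32)
shared = os.urandom(32)
file_key = os.urandom(32)
stored = xor(file_key, H(owner_secret))      # ciphertext kept at the cloud

def rekey(condition: str, release: str) -> bytes:
    lock = trapdoor(release, force=True)     # toy shortcut; really public params
    return xor(H(owner_secret), xor(H(shared, condition.encode()), lock))

# Proxy (cloud): transforms the ciphertext without ever seeing file_key.
rk = rekey("annual-report", "2031-01-01T00:00:00+00:00")
transformed = xor(stored, rk)

# Receiver: fails until the time server actually releases the trapdoor.
def receiver_decrypt(condition: str, release: str) -> bytes:
    td = trapdoor(release)                   # raises before the release time
    return xor(transformed, xor(H(shared, condition.encode()), td))

try:
    receiver_decrypt("annual-report", "2031-01-01T00:00:00+00:00")
except PermissionError as e:
    print("too early:", e)                   # decryption blocked until 2031
```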
579

Dual Migration for Cloud Service

Chen, Ya-Yin 12 July 2012 (has links)
none
580

User's Risk Management for the Personal Data of the Cloud Computing Service Industries

Huang, Yen-Lin 06 August 2012 (has links)
With the rapid development of information technology, "cloud computing" is becoming increasingly popular in industry, as various data-processing services become accessible simply by connecting to third-party cloud service providers over the network. The powerful processing, elastic usage, and low cost of cloud computing have thus ushered in a new global technological trend. Although cloud computing provides a cloud that is larger-scale, more relevant, and more beneficial, most practical cloud patrons are aware that what matters is the corresponding security. Anyone who has ever used the Internet, whether an enterprise or an individual, inevitably runs the risk of information being recorded, copied, leaked, deleted inappropriately or accidentally, or even used for inappropriate purposes by third parties. The private data and business secrets of an enterprise's stakeholders, including its customers, partners, employees, and suppliers, also suffer from this vulnerability. Therefore, for the cloud computing industry, what matters to the government, enterprises, and individuals is providing an information-security shelter rather than a network environment in which personal data is highly exposed. Cavoukian (2010) argues that information security in cloud computing is an issue in the public domain. The data generated by digital cloud computing management, and the number of people involved, are so large that every citizen is drawn to be concerned with government policies and laws (Lee, 2010). In this paper, we perform a risk assessment for cloud computing and discuss risk management mechanisms for the cloud computing industry using Freeman's stakeholder theory.
