81

Scalable Job Startup and Inter-Node Communication in Multi-Core InfiniBand Clusters

Sridhar, Jaidev Krishna 02 September 2009 (has links)
No description available.
82

FEASIBILITY STUDIES OF STATISTIC MULTIPLEXED COMPUTING

Celik, Yasin January 2018 (has links)
In 2012, when Professor Shi introduced me to the concept of Statistic Multiplexed Computing (SMC), I was skeptical. It contradicted everything I had learned and heard about distributed and parallel computing. However, I did believe that unhandled failures in any application will negatively impact its scalability. For that reason, I agreed to take on the feasibility study of SMC for practical applications. After six-plus years of research and experimentation, it became clear to me that the most widely held misconception when upscaling a distributed application is “either performance or reliability.” This misconception is the result of the direct use of hop-by-hop communication protocols in distributed application construction. Terminology: A hop-by-hop data protocol is a two-sided reliable lossless data communication protocol for transmitting data between a sender and a receiver. Either a sender or a receiver crash will cause data losses. Examples: MPI, RPC, RMI, OpenMP. An end-to-end data protocol is a single-sided reliable lossless data communication protocol for transmitting data between application programs. All runtime-available processors, networks and storage are automatically dispatched in best-effort support of reliable communication, regardless of transient and permanent device failures. Examples: HDFS, Blockchain, Fabric and SMC. An active end-to-end data protocol is a single-sided reliable lossless data communication protocol for transmitting data and automatically synchronizing application programs. Example: SMC (AnkaCom, AnkaStore (this dissertation)). Unlike the hop-by-hop protocols, the use of an end-to-end protocol forms an application-dependent overlay network. An overlay network for a distributed and parallel computing application, such as Blockchain, has been proven to defy the “common wisdom” on two important distributed computing challenges: a) Extreme-scale computing without single-point failures is practically feasible; thus, all transaction or data losses can be eliminated. b) Extreme-scale synchronized transaction replication is practically feasible; thus, the CAP conjecture and theorem become irrelevant. Unlike passive overlay networks, such as HDFS and Blockchain, this dissertation study proves that an active overlay network can deliver higher performance, higher reliability and security at the same time as the application scales up. Although application-level security is not part of this dissertation, it is easy to see that application-level end-to-end protocols will fundamentally eliminate “man-in-the-middle” attacks. This will nullify many well-known attacks. With the zero-single-point-failure and zero-impact synchronous replication features, SMC applications are naturally resistant to DDoS and ransomware attacks. This dissertation explores practical implementations of the SMC concept for compute-intensive (CI) and data-intensive (DI) applications. This defense discloses the details of the CI and DI runtime implementations and the results of inductive computational experiments. The computational environments include the NSF Chameleon bare-metal HPC cloud and Temple’s TCloud cluster. / Computer and Information Science
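As an illustration of the hop-by-hop terminology defined above (not of SMC itself), the following minimal mpi4py sketch shows a two-sided send/receive whose success depends on both endpoints staying alive; the rank roles, tag and payload are hypothetical.

```python
# Minimal sketch of a hop-by-hop (two-sided) transfer using MPI.
# If either the sending or the receiving process crashes mid-transfer,
# the message is lost -- matching the abstract's definition above.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Sender: this call pairs with a matching recv on rank 1.
    comm.send({"payload": 42}, dest=1, tag=11)
elif rank == 1:
    # Receiver: blocks until rank 0's send arrives.
    data = comm.recv(source=0, tag=11)
    print("received:", data)
```

Run under `mpirun -n 2` (or similar); an end-to-end protocol, by contrast, would keep the data available even if one endpoint failed.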
83

Partitioning Techniques for Reducing Computational Effort of Routing in Large Networks.

Woodward, Mike E., Al-Fawaz, M.M. January 2004 (has links)
No / A new scheme is presented for partitioning a network having a specific number of nodes and degree of connectivity such that the number of operations required to find a constrained path between a source node and destination node, averaged over all source-destination pairs, is minimised. The scheme can speed up the routing function, possibly by orders of magnitude under favourable conditions, at the cost of a sub-optimal solution.
84

A suitable server placement for peer-to-peer live streaming

Yuan, X.Q., Yin, H., Min, Geyong, Liu, X., Hui, W., Zhu, G.X. January 2013 (has links)
No / With the rapid growth of the scale, complexity, and heterogeneity of Peer-to-Peer (P2P) systems, it has become a great challenge to deal with peers' network-oblivious traffic and self-organization problems. A potential solution is to deploy servers in appropriate locations. However, due to the unique features and requirements of P2P systems, the traditional placement models cannot yield the desirable service performance. To fill this gap, we propose an efficient server placement model for P2P live streaming systems. Compared to the existing solutions, this model takes the Internet Service Provider (ISP) friendliness problem into account and can reduce the cross-network traffic among ISPs. Specifically, we introduce the peers' contribution into the proposed model, which makes it more suitable for P2P live streaming systems. Moreover, we deploy servers based on the theoretical solution subject to practical data and apply them to practical live streaming applications. The experimental results show that this new model can reduce the amount of cross-network traffic, improve system efficiency, has better adaptability to the Internet environment, and is more suitable for P2P systems than the traditional placement models.
85

Performance Evaluation of Cassandra Scalability on Amazon EC2

Srinadhuni, Siddhartha January 2018 (has links)
Context. In the fields of communication systems and computer science, Infrastructure as a Service provides the building blocks for cloud computing along with robust network features. AWS is one such Infrastructure as a Service; among its many services, Elastic Compute Cloud (EC2) is used to deploy virtual machines across several data centers and provides fault-tolerant storage for applications across the cloud. Apache Cassandra is one of many NoSQL databases providing fault tolerance and elasticity across servers. Its ring structure makes communication between the nodes in a cluster effective. Cassandra is robust in the sense that there is no downtime when adding new Cassandra nodes to an existing cluster.  Objectives. This study quantifies the latency of adding Cassandra nodes to Amazon EC2 instances and assesses the impact of replication factors (RF) and consistency levels (CL) on autoscaling. Methods. Primarily, a literature review is conducted on how an experiment with the above-mentioned constraints can be carried out. An experiment is then conducted to measure the latency and the effects of autoscaling. A 3-node Cassandra cluster runs on Amazon EC2 with Ubuntu 14.04 LTS as the operating system. A threshold value is identified for each Cassandra-specific configuration, and the cluster is scaled out to five nodes on AWS using the cassandra-stress benchmarking tool. This procedure is repeated for a 5-node Cassandra cluster and for each configuration with a mixed workload of equal reads and writes. Results. The latency of adding Cassandra nodes to Amazon EC2 instances has been measured, and the impacts of replication factors and consistency levels on autoscaling have been quantified. Conclusions. It is concluded that latency decreases after autoscaling for all the Cassandra configurations, and that changing the replication factors and consistency levels also changes Cassandra's performance.
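For readers unfamiliar with the two knobs the study varies, the sketch below shows where a replication factor and a consistency level are set using the DataStax Python driver; the contact points, keyspace and table names, and the chosen RF/CL values are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal sketch: where replication factor (RF) and consistency level (CL)
# are specified when talking to a Cassandra cluster. Node addresses,
# keyspace/table names and the chosen RF/CL are hypothetical.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from cassandra import ConsistencyLevel

cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # 3-node cluster
session = cluster.connect()

# RF is a property of the keyspace: each row is stored on 3 replicas here.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS bench
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("bench")
session.execute("CREATE TABLE IF NOT EXISTS kv (k text PRIMARY KEY, v text)")

# CL is a property of each request: QUORUM waits for a majority of replicas.
insert = SimpleStatement(
    "INSERT INTO kv (k, v) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(insert, ("key-1", "value-1"))
```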
86

Scalability in Startups : A Case Study of How Technological Startup Companies Can Enhance Scalability

Jenefeldt, Andreas, Foogel Jakobsson, Erik January 2020 (has links)
Startups and new businesses are important for the development of technology, the economy, and society as a whole. To be able to contribute towards this cause, startups need to be able to survive and grow. It is therefore important for startups to understand how they can scale up their business. This led to the purpose of this study: to determine success factors that help technological startup companies increase their scalability. Five areas were identified as having an impact on the scalability of a business, namely partnerships, cloud computing, modularity, process automation and business model scalability. Within these areas, several subareas of particular interest within the theory were found. Together, these subareas helped answer how companies can work with scalability in each area. These areas and their subareas went into an analytical model that formed the structure of the empirical and analytical parts of the study. The study is a multi-case study of 15 B2B companies of varying size and maturity, all of which offered software as a part of their solutions. The study concludes that there are six important factors for succeeding with scalability. One is to adopt partnerships, since this allows for outsourcing and gives access to resources, markets and customers. It is also concluded that cloud computing is a very scalable delivery method, but that it requires certain success factors, such as working with partners, having a customer focus, having the right knowledge internally, and having a standardized product. Further, modularity can enable companies to meet differing customer needs, since it increases flexibility, can expand the offer, and makes sales easier. The study concludes that process automation increases efficiency in the company and can be achieved by automating a number of processes. Focusing both internally and externally is another important factor for success, as it allows companies to develop a scalable product that is demanded by customers. Lastly, a scalable business model is found to be the final objective, and it is important to work with the other areas to get there, something that involves trial and error to find what works best for each individual company. These six factors formed the basis for the recommendations: startups should utilize partnerships and process automation, be aware of and work with the success factors of cloud computing, use modularity when selling to markets with different customer needs, automate other processes before automating sales, keep a customer focus when developing the product, and work actively to become more scalable.
87

Mikrotjänst-arkitektur och dess skalbarhet / The Scalability of Microservice Architecture

Larsson, Mattias January 2018 (has links)
In designing software applications, a chosen structure can often accentuate desired properties. To choose the right architecture, one must often weigh considerations, and sometimes even make compromises, about the intended characteristics of the application. At that stage it is well motivated to have a clear picture of which attributes the application should possess. Over time, one of the most important attributes is scalability. Knowledge about the scalability of different architectures can play a crucial part in the design phase, determining how an application is scaled in the future. In recent years, microservice architecture has become a popular way of building software, and its high scalability is said to be a contributing factor. The purpose of this work is to examine the scalability of microservice architecture relative to monolithic architecture, and to show how this quality attribute is affected by a transformation from a monolith to a microservice system. The work is based on an existing module from an open-source e-commerce platform. The module was first transformed into a working microservice; then both architectures were scaled horizontally. Using suitable tools such as Docker, the results show that horizontal scalability exists to a high degree within the microservice architecture and remains high thereafter. Scaling of microservices can be done with higher precision over what is to be changed. This stands in contrast to the monolithic approach, where scaling is limited by the performance of the environment in which the software application runs. The transformation to a microservice architecture resulted in an increase in scalability, since the scaling method was more fine-grained and isolated to the selected module. In contrast, individually scaling the monolithic module horizontally had to be done virtually with background processes, which amounts to an indirect scaling of the whole monolithic structure. Besides horizontal scalability, the evaluation focuses on the system quality attributes of simplicity, autonomy and modularity.
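As a small illustration of the kind of horizontal scaling described above, the sketch below uses the Docker SDK for Python to start several replicas of an extracted microservice; the image name, container names and ports are hypothetical and not taken from the thesis.

```python
# Minimal sketch: run three replicas of an extracted microservice image,
# each mapped to its own host port. A reverse proxy or load balancer
# (not shown) would distribute requests across them.
# The image name "shop/cart-service" and the ports are hypothetical.
import docker

client = docker.from_env()

for i, host_port in enumerate((8081, 8082, 8083)):
    client.containers.run(
        "shop/cart-service:latest",
        name=f"cart-service-{i}",
        detach=True,
        ports={"8080/tcp": host_port},  # container port -> host port
    )
```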
88

Performance scalability of n-tier application in virtualized cloud environments: Two case studies in vertical and horizontal scaling

Park, Junhee 27 May 2016 (has links)
The prevalence of multi-core processors, together with recent advancements in virtualization technologies, has enabled horizontal and vertical scaling within a physical node, achieving economical sharing of computing infrastructures as computing clouds. Through hardware virtualization, consolidated servers, each with a specific core allotment, run on the same physical node in dedicated Virtual Machines (VMs) to increase overall node utilization, which increases profit by reducing operational costs. Unfortunately, despite the conceptual simplicity of vertical and horizontal scaling in virtualized cloud environments, leveraging the full potential of this technology has presented significant scalability challenges in practice. One of the fundamental problems is performance unpredictability in virtualized cloud environments (ranked fifth among the top 10 obstacles to the growth of cloud computing). In this dissertation, we present two case studies, in vertical and horizontal scaling, that address this challenging problem. In the first case study, we describe concrete experimental evidence of an important source of performance variation: the mapping of virtual CPUs to physical cores. We then conduct an experimental comparative study of three major hypervisors (VMware, KVM and Xen) with regard to their support of n-tier applications running on multi-core processors. In the second case study, we present an empirical study showing that memory thrashing caused by interference among consolidated VMs is a significant source of performance interference that hampers the horizontal scalability of n-tier application performance. We then carry out transient event analyses of fine-grained experimental data that link very short bottlenecks caused by memory thrashing to very long response time (VLRT) requests. Furthermore, we provide three practical techniques, namely VM migration, memory reallocation and soft resource allocation, and show that they can mitigate the effects of performance interference among consolidated VMs.
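The first case study's variable, the mapping of virtual CPUs to physical cores, can be controlled explicitly; the sketch below shows one hedged way to pin a VM's vCPUs using the libvirt Python bindings. The domain name and the four-core host layout are assumptions for illustration, not the dissertation's setup.

```python
# Minimal sketch: pin each vCPU of a libvirt-managed VM to one physical core.
# Assumes a 4-core host and a running domain named "ntier-app-vm" (hypothetical).
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("ntier-app-vm")

HOST_CORES = 4
for vcpu in range(dom.maxVcpus()):
    # cpumap is a tuple of booleans, one per host core; exactly one True
    # entry pins this vCPU to a single physical core.
    cpumap = tuple(core == vcpu % HOST_CORES for core in range(HOST_CORES))
    dom.pinVcpu(vcpu, cpumap)

conn.close()
```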
89

The User Attribution Problem and the Challenge of Persistent Surveillance of User Activity in Complex Networks

Taglienti, Claudio 01 January 2014 (has links)
In the context of telecommunication networks, the user attribution problem refers to the challenge of recognizing communication traffic as belonging to a given user when the information needed to identify the user is missing. This is analogous to trying to recognize a nameless face in a crowd. The problem worsens as users move across many mobile networks (complex networks) owned and operated by different providers. The traditional approach of using the source IP address, which indicates where a packet comes from, does not work when used to identify mobile users. Recent efforts to address this problem by relying exclusively on web browsing behavior to identify users were limited to a small number of users (28 and 100 users). This was due to the inability of those solutions to link multiple user sessions together when they rely exclusively on the web sites visited by the user. This study tackles the problem by utilizing behavior-based identification while accounting for time and the sequential order of web visits by a user. Hierarchical Temporal Memories (HTMs) were used to classify historical navigational patterns for different users. Each layer of an HTM contains variable-order Markov chains of connected nodes which represent clusters of web sites visited in time order by the user (user sessions). HTM layers enable inference "generalization" by linking Markov chains within and across layers, and thus allow matching longer sequences of visited web sites (multiple user sessions). This approach enables linking multiple user sessions together without the need for a tracking identifier such as the source IP address. Results are promising: HTMs provided 99% recall accuracy for up to 500 users on synthetic data, and good recall accuracy of 95% and 87% for 5 and 10 users respectively on cellular network data. This research confirmed that the presence of long-tail (rarely visited) web sites among many repeated destinations can create unique differentiation. What was not anticipated prior to this research was the very high degree of repetitiveness of some web destinations found in real network data.
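The core idea, identifying a user by the time-ordered sequence of sites they visit, can be approximated far more simply than with HTMs; the sketch below is a hedged, first-order Markov illustration of sequence-based attribution, with made-up site names and training sessions. It is not the dissertation's HTM implementation.

```python
# Minimal sketch: attribute an unlabeled browsing session to a known user by
# comparing it against per-user first-order Markov chains of site transitions.
# Site names and training sessions are made up for illustration.
from collections import defaultdict
import math

def train(sessions):
    """Count site->site transitions across a user's historical sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    return counts

def log_likelihood(counts, session, smoothing=1e-3):
    """Score how well a session fits a user's transition model."""
    score = 0.0
    for a, b in zip(session, session[1:]):
        total = sum(counts[a].values())
        prob = (counts[a][b] + smoothing) / (total + smoothing * 1000)
        score += math.log(prob)
    return score

users = {
    "alice": train([["news", "mail", "forum.rare", "mail"],
                    ["news", "mail", "mail"]]),
    "bob":   train([["video", "shop", "video"],
                    ["shop", "video", "news"]]),
}

unknown = ["news", "mail", "forum.rare"]   # a new, unlabeled session
best = max(users, key=lambda u: log_likelihood(users[u], unknown))
print("attributed to:", best)              # the long-tail "forum.rare" favors alice
```

The long-tail effect noted in the abstract shows up even here: a single rarely visited destination sharply separates otherwise similar users.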
90

A comparative study of cloud computing environments and the development of a framework for the automatic deployment of scaleable cloud based applications

Mlawanda, Joyce 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012 / ENGLISH ABSTRACT: Modern-day online applications are required to deal with an ever-increasing number of users without decreasing in performance. This implies that the applications should be scalable. Applications hosted on static servers are inflexible in terms of scalability. Cloud computing is an alternative to the traditional paradigm of static application hosting and offers an illusion of infinite compute and storage resources. It is a way of computing whereby computing resources are provided by a large pool of virtualised servers hosted on the Internet. By virtually removing scalability, infrastructure and installation constraints, cloud computing provides a very attractive platform for hosting online applications. This thesis compares the cloud computing infrastructures Google App Engine and Amazon Web Services for hosting web applications and assesses their scalability performance compared to traditionally hosted servers. After the comparison of the three application hosting solutions, a proof-of-concept software framework for the provisioning and deployment of automatically scaling applications is built on Amazon Web Services, which is shown to be best suited for the development of such a framework.
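As a rough sketch of what "provisioning and deployment of automatically scaling applications" on Amazon Web Services can look like programmatically, the snippet below uses today's boto3 SDK rather than the 2012-era tooling the thesis would have used; the AMI ID, availability zone, instance type and group/policy names are placeholders, not values from the thesis.

```python
# Minimal sketch: create an EC2 Auto Scaling group plus a simple scale-out
# policy with boto3. All identifiers below are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-app-lc",
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI with the app baked in
    InstanceType="t3.micro",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchConfigurationName="web-app-lc",
    MinSize=1,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a"],
)

# Add one instance whenever the policy is triggered (e.g. by a CloudWatch alarm).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="scale-out-by-one",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)
```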
