281 |
Scalable and Highly Available Database Systems in the Cloud. Minhas, Umar Farooq, January 2013.
Cloud computing allows users to tap into a massive pool of shared computing
resources such as servers, storage, and network. These resources are provided as a
service to users, allowing them to “plug into the cloud” much as they would plug into a utility grid.
The promise of the cloud is to free users from the tedious and often complex task of
managing and provisioning computing resources to run applications. At the same
time, the cloud brings several additional benefits, including a pay-as-you-go cost
model, easier deployment of applications, elastic scalability, high availability, and
a more robust and secure infrastructure.
One important class of applications that users are increasingly deploying in
the cloud is database management systems. Database management systems differ
from other types of applications in that they manage large amounts of state that
is frequently updated, and that must be kept consistent at all scales and in the
presence of failure. This makes it difficult to provide scalability and high availability
for database systems in the cloud. In this thesis, we show how we can exploit
cloud technologies and relational database systems to provide a highly available
and scalable database service in the cloud.
The first part of the thesis presents RemusDB, a reliable, cost-effective high
availability solution that is implemented as a service provided by the virtualization
platform. RemusDB can make any database system highly available with little or
no code modifications by exploiting the capabilities of virtualization. In the second
part of the thesis, we present two systems that aim to provide elastic scalability
for database systems in the cloud using two very different approaches. The three
systems presented in this thesis bring us closer to the goal of building a scalable
and reliable transactional database service in the cloud.
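The abstract does not spell out the mechanism, but RemusDB builds on whole-virtual-machine checkpoint replication in the spirit of the Remus system: the primary VM's state is shipped to a backup host many times per second, and the backup resumes from the last complete checkpoint on failure, so the database inside the VM needs little or no modification. The sketch below is a minimal illustration of that idea, not the actual RemusDB implementation; the class names, checkpoint interval, and toy VM are assumptions.

    import time

    CHECKPOINT_INTERVAL = 0.05   # seconds; assumed value, Remus-style systems checkpoint many times per second

    class FakeVM:
        """Toy stand-in for a virtual machine running an unmodified database."""
        def __init__(self):
            self.state = {"pages_written": 0}
        def pause(self): pass
        def resume(self): pass
        def copy_dirty_state(self):
            self.state["pages_written"] += 1
            return dict(self.state)

    class BackupHost:
        """Holds the most recent consistent checkpoint and takes over on failure."""
        def __init__(self):
            self.last_checkpoint = None
        def receive_checkpoint(self, epoch, snapshot):
            self.last_checkpoint = (epoch, snapshot)   # commit only complete snapshots
        def resume_from_checkpoint(self):
            epoch, _ = self.last_checkpoint
            print("backup resuming VM from checkpoint", epoch)

    def primary_loop(vm, backup, epochs=3):
        for epoch in range(epochs):
            vm.pause()                                  # briefly stop the VM
            snapshot = vm.copy_dirty_state()            # memory pages + disk writes since the last epoch
            vm.resume()
            backup.receive_checkpoint(epoch, snapshot)  # network output is held until this is acknowledged
            time.sleep(CHECKPOINT_INTERVAL)

    backup = BackupHost()
    primary_loop(FakeVM(), backup)
    backup.resume_from_checkpoint()                     # simulate failover after the primary dies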
|
282 |
Virtual World: Observe, Interact and Simulate. Phor, Pallavi, 10 July 2007.
This thesis researches the potential for using Virtual Worlds as an advanced environment for interaction and simulation, beyond mere observation. Tables, matrices, and scenarios have been developed to illustrate, up front, the route that can be taken to develop an advanced virtual environment. The thesis attempts to build a dialogue for designers, to gauge a client's requirements and thereby propose a schedule of deliverables, time, and cost in a pre-project phase.
|
283 |
Optimization and Verification of an Integrated DSP. Svensson, Markus and Österholm, Thomas, January 2008.
There are many applications for DSPs (Digital Signal Processors) in the industry's most rapidly growing areas right now, as wireless communication along with audio and video products are becoming more and more popular. In this report, a DSP developed at the Division of Computer Engineering at Linköping University is optimized and verified.

Register forwarding was implemented at the general architecture level to avoid data hazards that may arise when implementing instruction pipelining in a processor.

The very common FFT algorithm is also optimized, but at the instruction-set level. That means the algorithm is carefully analyzed to find operations that may execute in parallel, and new instructions are then created for these parallel operations. The optimization is concentrated on the butterfly operation, as it is such a major part of the FFT computation. Comparing the accelerated butterfly with the unaccelerated one gives an improvement of 30% in terms of the clock cycles needed for the computation.

The report also discusses the benefits and drawbacks of changing from a hardware to a software stack, mostly in terms of interrupts and the return instruction.

Another important property of the processor is scalability: it is possible to attach extra peripherals to the core, which accelerate certain tasks. An interface towards these peripherals is developed, along with two template designs that may be used to develop other peripherals.

After all these modifications, a new test bench is developed to verify the functionality.
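To make the accelerated operation concrete, the sketch below shows a radix-2 decimation-in-time butterfly, the computation the report concentrates on; the separate complex multiply and add/subtract steps are what a fused butterfly instruction would collapse into fewer cycles. This is an illustrative example, not the thesis's instruction set.

    import cmath

    def butterfly(a, b, twiddle):
        """One radix-2 DIT butterfly: combines two inputs with a twiddle factor.
        An accelerated instruction would perform the complex multiply and the
        add/subtract pair as a single operation instead of several."""
        t = twiddle * b          # complex multiply (4 real multiplies, 2 adds in scalar code)
        return a + t, a - t      # add/subtract pair

    def fft(x):
        """Minimal recursive radix-2 FFT built from the butterfly above (len(x) must be a power of two)."""
        n = len(x)
        if n == 1:
            return x
        even = fft(x[0::2])
        odd = fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            w = cmath.exp(-2j * cmath.pi * k / n)        # twiddle factor W_N^k
            out[k], out[k + n // 2] = butterfly(even[k], odd[k], w)
        return out

    print(fft([1, 2, 3, 4]))   # quick check against the known 4-point DFT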
|
284 |
Large-scale network analytics. Song, Han Hee, 5 October 2012.
Scalable and accurate analysis of networks is essential to a wide variety of existing and emerging network systems. Specifically, network measurement and analysis helps to understand networks, improve existing services, and enable new data-mining applications. To support various services and applications in large-scale networks, network analytics must address the following challenges: (i) how to conduct scalable analysis in networks with a large number of nodes and links, (ii) how to flexibly accommodate various objectives from different administrative tasks, and (iii) how to cope with dynamic changes in the networks. This dissertation presents novel path analysis schemes that effectively address the above challenges in analyzing pair-wise relationships among networked entities. In doing so, we make three major contributions to large-scale IP networks, social networks, and application service networks.

For IP networks, we propose an accurate and flexible framework for path property monitoring. Analyzing the performance side of paths between pairs of nodes, our framework incorporates approaches that perform exact reconstruction of path properties as well as approximate reconstruction. Our framework is highly scalable, supporting measurement experiments that span thousands of routers and end hosts, and flexible enough to accommodate a variety of design requirements.

For social networks, we present scalable and accurate graph embedding schemes. Aimed at analyzing the pair-wise relationships of social network users, we present three dimensionality reduction schemes leveraging matrix factorization, count-min sketch, and graph clustering paired with spectral graph embedding. As concrete applications showing the practical value of our schemes, we apply them to the important social analysis tasks of proximity estimation, missing link inference, and link prediction. The results clearly demonstrate the accuracy, scalability, and flexibility of our schemes for analyzing social networks with millions of nodes and tens of millions of links.

For application service networks, we provide a proactive service quality assessment scheme. Analyzing the relationship between the satisfaction level of subscribers of an IPTV service and network performance indicators, our proposed scheme proactively assesses user-perceived service quality (i.e., detects issues before IPTV subscribers complain) using performance metrics collected from the network. In an evaluation using network data collected from a commercial IPTV service provider, we show that our scheme is able to predict 60% of the service problems that customers complain about, with only 0.1% false positives.
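As a rough illustration of the dimensionality-reduction idea behind such graph embedding schemes (not the dissertation's actual algorithms), the sketch below embeds nodes via a truncated SVD of the adjacency matrix and estimates pair-wise proximity from the low-rank factors alone; the rank and the toy graph are assumptions.

    import numpy as np

    def embed_and_estimate(adj, rank=2):
        """Low-rank embedding of an adjacency matrix via truncated SVD.
        Each node gets a short coordinate vector; proximity between any pair
        is estimated from the factors instead of the full n x n matrix."""
        u, s, vt = np.linalg.svd(adj, full_matrices=False)
        left = u[:, :rank] * s[:rank]          # n x rank node coordinates
        right = vt[:rank, :].T                 # n x rank node coordinates
        return left, right

    def proximity(left, right, i, j):
        # Estimated proximity between nodes i and j from the embedding alone.
        return float(left[i] @ right[j])

    # Toy 4-node path graph: 0-1, 1-2, 2-3 are linked.
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    left, right = embed_and_estimate(adj)
    print(proximity(left, right, 0, 2))   # an unlinked pair scores non-zero, hinting at a possible missing link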
|
285 |
Deconstructing the "Power and Control Motive": Developing and Assessing the Measurability of Internal Power. Wagers, Shelly Marie, 1 January 2012.
Despite increased social recognition, law and policy changes within the criminal justice system, and the widespread use of court-mandated batterer intervention programs (BIPs), domestic violence continues to be a persistent problem. The lack of a significant decline in incidence rates, along with a growing body of empirical evidence indicating that BIPs are, at best, only moderately effective, raises serious concern. Effective policies and programs are based upon empirically tested theory. The assertion "the batterer's motive is power and control" has become fundamental to almost all currently used and accepted mainstream theoretical explanations of domestic violence. However, the domestic violence literature has not yet advanced any specific conceptualization of power as a construct, it has not produced a theoretical model of power that articulates why or how power specifically acts as a motive for a batterer, and it has never empirically tested this fundamental assertion.
The purpose of this research is to address this gap by focusing on the role of power in domestic violence theory and offering a more complete conceptualization and precise operationalization of power. The main goal of this study was to advance our current understanding of an individual's sense of power and control as a motive for using coercive control tactics, such as psychological and physical abuse tactics, against an intimate partner. Therefore, the primary objective of this study was to develop and assess the measurability of the construct "internal power". Specifically, it defined, conceptualized, and operationalized internal power. A Pearson's product-moment correlation coefficient was then examined and a principal components factor analysis was conducted to investigate the dimensionality and underlying factor structure of internal power. Findings indicated empirical support for the proposed measure of internal power, allowing its relationship to an individual's use of psychological and physical abuse tactics to be empirically assessed. Results of a t-test and examination of a Pearson's product-moment correlation coefficient indicated that internal power is inversely related to an individual's use of psychological and physical abuse tactics. Findings indicate that both the measure of internal power and its potential relationship to an individual's use of psychological and physical abuse tactics warrant further exploration and development.
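Purely as an illustration of the statistical steps named above (a Pearson product-moment correlation and a principal components analysis), and not the study's actual items, data, or results, the following sketch runs both on a hypothetical item-response matrix; every value here is fabricated for demonstration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: 200 respondents answering 6 Likert-type items intended to tap "internal power".
    items = rng.integers(1, 6, size=(200, 6)).astype(float)
    internal_power = items.mean(axis=1)                           # simple composite score for the construct
    abuse_tactics = 10 - internal_power + rng.normal(0, 1, 200)   # illustrative outcome, not real data

    # Pearson product-moment correlation between the composite and the outcome.
    r = np.corrcoef(internal_power, abuse_tactics)[0, 1]
    print(f"Pearson r = {r:.2f}")                                 # negative here by construction

    # Principal components analysis on the standardized items to inspect dimensionality.
    z = (items - items.mean(axis=0)) / items.std(axis=0)
    eigvals = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))[::-1]
    print("eigenvalues:", np.round(eigvals, 2))                   # components with eigenvalue > 1 are typically retained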
|
286 |
Scalable Trajectory Approach for ensuring deterministic guarantees in large networks. Medlej, Sara, 26 September 2013.
In critical real-time systems, any faulty behavior may endanger lives. Hence, system verification and validation are essential before deployment. In fact, safety authorities require that deterministic guarantees be ensured. In this thesis, we are interested in offering temporal guarantees; in particular, we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach appeared to be a good candidate due to the tightness of the bound it offers. This method uses results established by scheduling theory to derive an upper bound. The reasons leading to a pessimistic upper bound are investigated. Moreover, since the method must be applied to large networks, it is important to be able to give results in an acceptable time frame. Hence, a study of the method's scalability was carried out. The analysis shows that the complexity of the computation is due to a recursive and iterative process. As the number of flows and switches increases, the total runtime required to compute the upper bound of every flow present in the network under study grows rapidly. Building on the concept of the Trajectory Approach, we propose to compute an upper bound in a reduced time frame and without significant loss of precision; we call this the Scalable Trajectory Approach. Simulation results from applying it to a network show that the total runtime was reduced from several days to a dozen seconds.
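To illustrate the kind of recursive, iterative computation the abstract alludes to, and not the Trajectory Approach itself, here is a textbook fixed-point response-time recurrence from scheduling theory, applied to a single flow sharing a resource with interfering flows; all parameters are assumed for the example.

    import math

    def response_time_bound(c_own, interferers, deadline=10_000):
        """Classic fixed-point recurrence: R = c_own + sum over interfering
        flows j of ceil(R / T_j) * C_j. Iterates until the bound stabilizes
        or exceeds the deadline (then no bound is reported)."""
        r = c_own
        while True:
            interference = sum(math.ceil(r / period) * cost for cost, period in interferers)
            r_next = c_own + interference
            if r_next == r:
                return r            # fixed point reached: a valid upper bound
            if r_next > deadline:
                return None         # no bound found within the deadline
            r = r_next

    # Assumed example: a flow needing 2 time units, with two interfering flows given as (cost, period).
    print(response_time_bound(2, [(1, 5), (2, 8)]))   # prints 5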
|
287 |
Middleware for the Internet of Intelligent Things ("Middleware pour l'Internet des Objets Intelligents"). Hachem, Sara, 10 February 2014.
The Internet of Things (IoT) is characterized by the introduction, around users, of an ever-growing number of objects (or things) capable of acquiring data from their environment and acting upon it, and endowed with sophisticated computing and communication capabilities. A large share of these objects have the advantage of being mobile, but this particularity also raises new problems. The most critical among them derive directly from today's Internet, in an amplified form, and concern managing the very large number of connected users and objects, interoperability between objects built on heterogeneous technologies, and the environment changes caused by the mobility of a very large number of objects. This thesis studies and addresses these problems by adapting the Service-Oriented Architecture (SOA) so that the sensors and actuators embedded in objects can be exposed as services, thereby reducing the coupling between these services and their hosts so as to abstract away their heterogeneous nature. However, despite its advantages, SOA was not designed to handle a scale as large as that of the mobile IoT. Consequently, the main contribution of this thesis is the design of a Thing-based Service-Oriented Architecture that rethinks SOA's functionalities, in particular its service discovery and composition mechanisms. This new architecture has been realized in MobIoT, a middleware specifically designed to manage and control the very large number of mobile objects involved in IoT operations. To evaluate this architecture, we implemented a prototype and analyzed its performance through extensive experiments, which show that the proposed solutions are viable and relevant, in particular with respect to scalability.
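As a rough illustration of thing-based service discovery, and not MobIoT's actual API, the sketch below registers sensing services by the capability they expose rather than by device identity, and returns only a small sample of providers at query time; the class and capability names and the sampling strategy are assumptions.

    import random
    from collections import defaultdict

    class ThingServiceRegistry:
        """Minimal registry keyed by sensing capability rather than by individual device."""
        def __init__(self):
            self.providers = defaultdict(list)

        def register(self, thing_id, capability):
            # A thing advertises what it can measure or actuate, not who it is.
            self.providers[capability].append(thing_id)

        def discover(self, capability, k=2):
            # Return only a small sample of matching things; querying every
            # provider would not scale to millions of mobile devices.
            candidates = self.providers.get(capability, [])
            return random.sample(candidates, min(k, len(candidates)))

    registry = ThingServiceRegistry()
    for i in range(10):
        registry.register(f"phone-{i}", "temperature")
    registry.register("bus-42", "location")

    print(registry.discover("temperature"))   # e.g. ['phone-3', 'phone-7']
    print(registry.discover("location"))      # ['bus-42']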
|
288 |
New Techniques for Building Timing-Predictable Embedded Systems. Guan, Nan, January 2013.
Embedded systems are becoming ubiquitous in our daily life. Due to their close interaction with the physical world, embedded systems are typically subject to timing constraints. At design time, it must be ensured that the run-time behavior of such systems satisfies the pre-specified timing constraints under any circumstances. In this thesis, we develop techniques to address the timing analysis problems brought by the increasing complexity of the underlying hardware and software at different levels of abstraction in embedded systems design.

On the program level, we develop quantitative analysis techniques to predict cache hit/miss behavior for tight WCET estimation, and study two commonly used replacement policies, MRU and FIFO, which cannot be analyzed adequately using the state-of-the-art qualitative cache analysis method. Our quantitative approach greatly improves the precision of WCET estimation and discloses interesting predictability properties of these replacement policies, which are concealed in the qualitative analysis framework.

On the component level, we address the challenges raised by multi-core computing. Several fundamental problems in multiprocessor scheduling are investigated. In global scheduling, we propose an analysis method to rule out a great part of impossible system behaviors for better analysis precision, and establish conditions to guarantee the bounded responsiveness of computing tasks. In partitioned scheduling, we close a long-standing open problem by generalizing Liu and Layland's famous utilization bound for uniprocessor real-time scheduling to multiprocessor systems. We also propose to use cache partitioning in multi-core systems to avoid contention on shared caches, and solve the underlying schedulability analysis problem.

On the system level, we present techniques to improve the Real-Time Calculus (RTC) analysis framework in both efficiency and precision. First, we have developed Finitary Real-Time Calculus to solve the scalability problem of the original RTC due to period explosion. The key idea is to maintain and operate on only a limited prefix of each curve that is relevant to the final results throughout the analysis procedure. We further improve the analysis precision of EDF components in RTC by precisely bounding the response time of each computation request.
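For reference, the uniprocessor result the thesis generalizes is Liu and Layland's classic utilization bound: n periodic tasks scheduled with rate-monotonic fixed priorities are schedulable if their total utilization does not exceed n(2^(1/n) - 1). A minimal sketch of that test follows; the task sets are assumed examples.

    def liu_layland_schedulable(tasks):
        """tasks: list of (worst_case_execution_time, period) pairs.
        Returns True if the task set passes Liu and Layland's sufficient
        rate-monotonic schedulability test on a single processor."""
        n = len(tasks)
        utilization = sum(c / t for c, t in tasks)
        bound = n * (2 ** (1 / n) - 1)     # approaches ln 2 (about 0.693) as n grows
        return utilization <= bound

    print(liu_layland_schedulable([(1, 4), (1, 5), (2, 10)]))   # U = 0.65 -> True
    print(liu_layland_schedulable([(2, 4), (2, 5), (2, 10)]))   # U = 1.10 -> False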
|
289 |
An aggregative approach for scalable detection of DoS attacks. Hamidi, Alireza, 22 August 2008.
One of the most serious threats to data networks, particularly pervasive commercial networks such as Voice-over-IP (VoIP) providers, is the Denial-of-Service (DoS) attack. Currently, the majority of solutions for these attacks focus on observing detailed server state changes caused by some or all of the incoming messages. This approach, however, requires a significant amount of the server's memory and processing time, so such detectors cannot scale up to network edge points that receive millions of connections (requests) per second. To solve this problem, it is desirable to design stateless detection mechanisms. One approach is to aggregate transactions into groups. This research focuses on stateless, scalable DoS intrusion detection mechanisms that avoid keeping detailed per-connection state while maintaining acceptable efficiency. To this end, we adopt a two-layer aggregation scheme termed Advanced Partial Completion Filters (APCF), an intrusion detection model that defends against DoS attacks without tracking state information for each individual connection. Analytical as well as simulation analysis is performed on the proposed APCF. A simulation test bed has been implemented in OMNeT++, and the simulations show that APCF achieves notable detection rates in terms of true positive and false positive detections compared with its predecessor, PCF. Although further study is needed to relate APCF tuning to a given network situation, this research shows a valuable gain in moving intrusion detection from poorly scalable stateful mechanisms to a scalable aggregate approach.
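The abstract does not detail the filter itself, so the following is an illustrative sketch of the partial-completion-filter idea that APCF builds on: transaction openings and completions are hashed into a small array of counters, and a persistently large residual count suggests many transactions are started but never completed, as in a flooding attack. The bucket count and threshold are assumptions, not the thesis's parameters.

    import hashlib

    class PartialCompletionFilter:
        """Aggregate counters over transaction open/close events; no per-connection state."""
        def __init__(self, buckets=8, threshold=50):
            self.counts = [0] * buckets
            self.buckets = buckets
            self.threshold = threshold

        def _bucket(self, flow_key):
            digest = hashlib.sha1(flow_key.encode()).digest()
            return digest[0] % self.buckets

        def on_open(self, flow_key):      # e.g. a TCP SYN or SIP INVITE observed
            self.counts[self._bucket(flow_key)] += 1

        def on_complete(self, flow_key):  # e.g. the matching ACK or BYE observed
            self.counts[self._bucket(flow_key)] -= 1

        def suspicious_buckets(self):
            # Buckets whose opens far outnumber completions point to a possible attack group.
            return [i for i, c in enumerate(self.counts) if c > self.threshold]

    pcf = PartialCompletionFilter()
    for i in range(1000):
        pcf.on_open(f"attacker-flow-{i}")      # flood of never-completed transactions
    for i in range(20):
        pcf.on_open(f"legit-flow-{i}")
        pcf.on_complete(f"legit-flow-{i}")     # legitimate transactions cancel out
    print(pcf.suspicious_buckets())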
|
290 |
An underwater safety-critical mobile communication system. Wong, Jennifer, 15 May 2009.
Recreational scuba diving is a highly social activity in which divers are encouraged to work in groups of two or more people. Though the activity is collaborative, divers are unable to communicate freely and naturally. Additionally, the distortion of sensory information (e.g., distances and sounds cannot be judged as accurately underwater) makes it harder to keep track of critical information, which impairs divers' ability to engage in the underwater world. We have studied and designed a fault-tolerant system, including the software, the device, and the network, to foster underwater communication. We studied the technology required, the software design for both single and multiple users, and the network design needed to support such a system. In the thesis, we set up and analyzed the results of three user studies and a simulation to investigate the viability of the proposed design.
|