Awad, Ashraf A.
01 December 2003
No description available.
Ho Wang Hei. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 93-94). / Abstracts in English and Chinese.
Contents:
Chapter 1 --- Introduction --- p.1
1.1 --- Motivation and Contributions --- p.1
1.1.1 --- Scalability of Network Capacity with Power Control --- p.1
1.1.2 --- Trade-off between Network Capacity and Fairness with Power Control --- p.3
1.2 --- Related Work --- p.4
1.3 --- Organization of the Thesis --- p.6
Chapter 2 --- Background --- p.8
2.1 --- Hidden- and Exposed-node Problems --- p.8
2.1.1 --- HN-free Design (HFD) --- p.9
2.1.2 --- Non-Scalable Capacity in 802.11 caused by EN --- p.11
2.2 --- Shortcomings of Minimum-Transmit-Power Approach --- p.13
Chapter 3 --- Simultaneous Transmission Constraints with Power Control --- p.15
3.1 --- Physical-Collision Constraints --- p.16
3.1.1 --- Protocol-Independent Physical-Collision Constraints --- p.17
3.1.2 --- Protocol-Specific Physical-Collision Constraints --- p.17
3.2 --- Protocol-Collision-Prevention Constraints --- p.18
3.2.1 --- Transmitter-Side Carrier-Sensing Constraints --- p.18
3.2.2 --- Receiver-Side Carrier-Sensing Constraints --- p.19
Chapter 4 --- Graph Models for Capturing Transmission Constraints and Hidden-node Problems --- p.20
4.1 --- Link-Interference Graph from Physical-Collision Constraints --- p.21
4.2 --- Protocol-Collision-Prevention Graphs --- p.22
4.3 --- Ideal Protocol-Collision-Prevention Graphs --- p.22
4.4 --- Definition of HN and EN and their Investigation using Graph Model --- p.23
4.5 --- Attacking Cases --- p.26
Chapter 5 --- Scalability of Network Capacity with Adaptive Power Control --- p.27
5.1 --- Selective Disregard of NAVs (SDN) --- p.27
5.2 --- Scalability of Network Capacity: Analytical Discussion --- p.29
5.3 --- Adaptive Power Control for SDN --- p.31
5.3.1 --- Per-iteration Power Adjustment --- p.32
5.3.2 --- Power Control Scheduling Strategy --- p.35
5.3.3 --- Power Exchange Algorithm --- p.39
5.3.4 --- Comparison of Scheduling Strategies --- p.41
5.4 --- Scalability of Network Capacity: Numerical Results --- p.43
Chapter 6 --- Decoupled Adaptive Power Control (DAPC) --- p.45
6.1 --- Per-iteration Power Adjustment --- p.45
6.2 --- Power Exchange Algorithm --- p.47
6.3 --- Implementation of DAPC --- p.48
6.4 --- Deadlock Problem in DAPC --- p.50
Chapter 7 --- Progressive-Uniformly-Scaled Power Control (PUSPC): Deadlock-free Design --- p.53
7.1 --- Algorithm of PUSPC --- p.53
7.2 --- Deadlock-free Property of PUSPC --- p.60
7.3 --- Deadlock Resolution of DAPC using PUSPC --- p.62
Chapter 8 --- Incremental Power Adaptation --- p.65
8.1 --- Incremental Power Adaptation (IPA) --- p.65
8.2 --- Maximum Allowable Power in EPA --- p.68
8.3 --- Numerical Results of IPA --- p.71
Chapter 9 --- Numerical Results and the Trade-off between EN and HN --- p.78
Chapter 10 --- Conclusion --- p.83
Appendix I: Proof of the Correct Operation of PE Algorithm for APC for SDN --- p.86
Appendix II: Proof of the Correct Operation of PE Algorithm for DAPC --- p.89
Appendix III: Scalability of the Communication Cost of PE Algorithm --- p.91
Bibliography --- p.93
MacInnis, Robert F.
This thesis presents a scalable service-oriented architecture for the demand-driven deployment of location-neutral software services, using an end-to-end or ‘holistic’ approach to address identified shortcomings of the traditional Web Services model. The architecture presents a multi-endpoint Web Service environment which abstracts over Web Service location and technology and enables the dynamic provision of highly available Web Services. The model describes mechanisms which provide a framework within which Web Services can be reliably addressed, bound to, and utilized, at any time and from any location. The presented model eases the task of providing a Web Service by absorbing deployment and management tasks. It eases the development of consumer agent applications by letting developers program against what a service does, not where it is or whether it is currently deployed. It extends the platform-independent ethos of Web Services by providing deployment mechanisms which can be used independently of implementation and deployment technologies. Crucially, it maintains the Web Service goal of universal interoperability, preserving each actor’s view upon the system so that existing Service Consumers and Service Providers can participate without any modifications to provider agent or consumer agent application code. Lastly, the model aims to enable the efficient consumption of hosting resources by providing mechanisms to dynamically apply and reclaim resources based upon measured consumer demand.
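The location-neutral binding idea in the abstract above can be illustrated with a minimal registry sketch. All class, service, and endpoint names here are invented for illustration; the thesis's actual design and API are not described at this level of detail in the record.

```python
# Minimal sketch of location-neutral service resolution (illustrative only):
# consumers resolve a logical service name to whichever endpoint is currently
# deployed, programming against *what* a service does rather than *where* it is.

class ServiceRegistry:
    def __init__(self):
        # logical service name -> list of currently deployed endpoints
        self._endpoints = {}

    def register(self, service, endpoint):
        self._endpoints.setdefault(service, []).append(endpoint)

    def deregister(self, service, endpoint):
        self._endpoints.get(service, []).remove(endpoint)

    def resolve(self, service):
        # Return any live endpoint; the consumer never hard-codes a location.
        live = self._endpoints.get(service)
        if not live:
            raise LookupError(f"no deployed endpoint for {service!r}")
        return live[0]

registry = ServiceRegistry()
registry.register("StockQuoteService", "http://hostA:8080/quote")
registry.register("StockQuoteService", "http://hostB:8080/quote")
print(registry.resolve("StockQuoteService"))
```

Because consumers only ever see the logical name, endpoints can be redeployed or reclaimed behind the registry without touching consumer agent code, which is the interoperability property the abstract emphasizes.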
01 September 2014
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014. / Virtual worlds and massively multiplayer online games are amongst the most popular applications on the Internet. Hosting these applications requires a reliable architecture: one that can handle high user loads, maintain a complex game state, respond promptly to game interactions, and prevent cheating, amongst other properties. Many of today’s Massively Multiplayer Online Games (MMOGs) use client-server architectures to provide multiplayer service. Clients (players) send their actions to a server, which calculates the game state and publishes the information to the clients. Although the client-server architecture has been widely adopted for MMOGs, it suffers from many limitations. First, applications based on a client-server architecture are difficult to support and maintain given the dynamic user base of online games. Such architectures do not scale easily or handle heavy loads well. Also, the server constitutes a single point of failure. We argue that peer-to-peer architectures can provide better support for MMOGs: they enable the user base to scale to large numbers, and they limit the disruptions players experience when other nodes fail. This research designs and implements a peer-to-peer architecture for MMOGs that aims to reduce message latency both over the network and at the application layer. We refine the communication between nodes to reduce network latency by using SPDY, a protocol designed to reduce web page load time. At the application layer, an event-driven paradigm is used to process messages. Through user-load simulation, we show that our peer-to-peer design is able to process and reliably deliver messages in a timely manner.
Furthermore, by distributing the work conducted by a game server, our research shows that a peer-to-peer architecture responds more quickly to requests than client-server models do.
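The event-driven message-processing paradigm mentioned in the abstract can be sketched as a dispatcher that routes each incoming message to handlers registered for its event type. The handler names and message shapes below are hypothetical, not taken from the thesis's implementation.

```python
# Illustrative sketch of event-driven message processing on a peer:
# messages are routed to per-event-type handlers, so a node updates its
# local game state without blocking on a central server.

from collections import defaultdict

class EventDispatcher:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def dispatch(self, message):
        # Route the message to every handler registered for its type.
        for handler in self._handlers[message["type"]]:
            handler(message)

# Hypothetical game state and handler.
state = {"positions": {}}

def handle_move(msg):
    state["positions"][msg["player"]] = msg["pos"]

dispatcher = EventDispatcher()
dispatcher.on("move", handle_move)
dispatcher.dispatch({"type": "move", "player": "p1", "pos": (3, 4)})
```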
On the access pricing and network scaling issues of wireless mesh networks. January 2006.
Lam Kong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 84-85). / Abstracts in English and Chinese.
Contents:
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Related Work and Background --- p.7
2.1 --- Competition-free Unlimited Capacity Model -- One-hop Case --- p.9
2.2 --- Competition-free Unlimited Capacity Model -- Two-hop Case --- p.11
Chapter 3 --- Extensions to Competition-free Unlimited Capacity Model --- p.13
3.1 --- Optimal Pricing for the One-hop Case under Various Utility Distributions --- p.13
3.2 --- Optimal Pricing for Competition-free Multi-hop Wireless Mesh Networks --- p.16
3.3 --- The Issue of Network Scaling --- p.22
Chapter 4 --- Competition-free Limited Capacity Model --- p.28
4.1 --- One-hop Case --- p.28
4.2 --- Multi-hop Case --- p.36
Chapter 5 --- Unlimited Capacity Model with Price Competition --- p.42
5.1 --- Renewed Game Model for Networks with Price Competition --- p.43
5.2 --- Pricing Equilibria in Different Network Topologies --- p.46
5.2.1 --- Case A: Two Access Points Competing in a One-hop Network --- p.47
5.2.2 --- Case B: Two Access Points Competing in a Two-hop Network --- p.51
5.2.3 --- Case C: Two Resellers Competing in a Two-hop Network --- p.54
5.2.4 --- Case D: Extending Case A into a Multi-hop Network --- p.60
5.2.5 --- Case E: Extending Case C into a Multi-hop Network --- p.66
5.2.6 --- The Unified Pricing Equilibrium --- p.68
5.2.7 --- Case F: The Characterizing Multi-hop Network --- p.75
5.3 --- Revisiting the Network Scaling Issue --- p.80
Chapter 6 --- Conclusion --- p.82
Bibliography --- p.84
Appendix A --- Proof of the PBE for Competition-free Multi-hop Wireless Mesh Networks --- p.86
Appendix B --- Proof of the Unified Pricing Equilibrium --- p.92
Kim, Jong Yul
This thesis looks in depth at telephony server clusters, the modern switchboards at the core of a packet-based telephony service. The most widely used de facto standard protocols for telecommunications are the Session Initiation Protocol (SIP) and the Real-time Transport Protocol (RTP). SIP is a signaling protocol used to establish, maintain, and tear down communication channels between two or more parties. RTP is a media delivery protocol that allows packets to carry digitized voice, video, or text. SIP telephony server clusters that provide communications services, such as an emergency calling service, must be scalable and highly available. We evaluate existing commercial and open source telephony server clusters to see how they differ in scalability and high availability. We also investigate how a scalable SIP server cluster can be built on a cloud computing platform. Elasticity of resources is an attractive property for SIP server clusters because it allows the cluster to grow or shrink organically based on traffic load. However, simply deploying existing clusters on cloud computing platforms is not enough to take full advantage of elasticity. We explore the design and implementation of clusters that scale in real time. The database tier of our cluster was modified to use a scalable key-value store so that the SIP proxy tier and the database tier can scale separately. Load monitoring and reactive threshold-based scaling logic are presented and evaluated. Server clusters also need to reduce processing latency; otherwise, subscribers experience low quality of service such as delayed call establishment, dropped calls, and inadequate media quality. Cloud computing platforms do not guarantee latency on virtual machines because of resource contention on the same physical host. These extra latencies from resource contention are temporary in nature.
Therefore, we propose and evaluate a mechanism that temporarily distributes more incoming calls to responsive SIP proxies, based on measurements of the processing delay in each proxy. Availability of SIP server clusters is also a challenge on platforms where a node may fail at any time. We investigated how single component failures in a cluster can lead to a complete system outage. We found that for most single component failures, simply having redundant components of the same type is enough to mask the failure; for client-facing components, however, smarter clients and DNS resolvers are necessary. Throughout the thesis, a prototype SIP proxy cluster is re-used, with variations in architecture and configuration, to demonstrate and address the issues mentioned above. This allows us to tie our approaches to the different issues into one coherent system that is dynamically scalable, remains responsive despite the latency variations of virtual machines, and tolerates single component failures in cloud platforms.
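The delay-aware call distribution described above can be sketched as weighted random selection, where a proxy's weight is the inverse of its measured processing delay, so responsive proxies receive proportionally more calls. The weighting scheme and all names below are assumptions for illustration; the thesis's exact mechanism may differ.

```python
# Hedged sketch: distribute incoming calls toward SIP proxies that currently
# report lower processing delay (inverse-delay weighted random choice).

import random

def pick_proxy(delays, rng=random.random):
    """delays: dict mapping proxy name -> measured processing delay (seconds)."""
    # Weight each proxy by the inverse of its measured delay.
    weights = {p: 1.0 / d for p, d in delays.items()}
    total = sum(weights.values())
    r = rng() * total
    for proxy, w in weights.items():
        r -= w
        if r <= 0:
            return proxy
    return proxy  # fallback for floating-point rounding

# proxy-a is five times more responsive, so it receives ~5/6 of new calls.
delays = {"proxy-a": 0.020, "proxy-b": 0.100}
chosen = pick_proxy(delays)
```

Because the weights are recomputed from fresh delay measurements, the skew toward a proxy is temporary, matching the observation that contention-induced latency on virtual machines is itself temporary.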
12 July 2004
The goal of this thesis is to use and advance the techniques developed in the field of exact and approximation algorithms for many of the problems arising in the context of the Internet. We formalize the method of dual fitting and the idea of the factor-revealing LP, and use this combination to design and analyze two greedy algorithms for the metric uncapacitated facility location problem, with approximation factors of 1.861 and 1.61 respectively. We also provide the first polynomial-time algorithm for the linear version of a market equilibrium model defined by Irving Fisher in 1891. Our algorithm is modeled after Kuhn's primal-dual algorithm for bipartite matching. We also study the connectivity properties of the Internet graph and their impact on its structure. In particular, we consider the model of growth with preferential attachment for modeling the graph of the Internet and prove that, under some reasonable assumptions, this graph has constant conductance.
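To make the facility location setting concrete, here is a classical greedy "star" heuristic for metric uncapacitated facility location: repeatedly open the (facility, client-set) pair with the cheapest cost per newly served client. This is a simplified textbook greedy, not the thesis's 1.861- or 1.61-factor algorithms, whose precise rules and dual-fitting analysis are more involved; the instance data below is invented.

```python
# Simplified greedy for uncapacitated facility location (illustrative sketch).

def greedy_facility_location(open_cost, dist):
    """open_cost[i]: cost of opening facility i.
    dist[i][j]: cost of connecting client j to facility i.
    Returns (opened facilities, client assignment, total cost)."""
    n_clients = len(dist[0])
    unassigned = set(range(n_clients))
    opened, assignment = set(), {}
    while unassigned:
        best = None  # (cost per client, facility, chosen clients)
        for i in range(len(open_cost)):
            # Best "star" for facility i: its k cheapest unassigned clients.
            clients = sorted(unassigned, key=lambda j: dist[i][j])
            cost = open_cost[i] if i not in opened else 0.0  # pay opening once
            for k, j in enumerate(clients, start=1):
                cost += dist[i][j]
                ratio = cost / k
                if best is None or ratio < best[0]:
                    best = (ratio, i, clients[:k])
        _, i, star = best
        opened.add(i)
        for j in star:
            assignment[j] = i
            unassigned.discard(j)
    total = (sum(open_cost[i] for i in opened)
             + sum(dist[assignment[j]][j] for j in assignment))
    return opened, assignment, total
```

Dual fitting would then bound the quality of such a greedy by charging each step's cost to an infeasible dual solution and scaling it down by a factor found via a factor-revealing LP; that analysis is beyond this sketch.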
08 November 2007
Studying the internal characteristics of a network using measurements obtained from endhosts is known as network tomography. The foremost challenge in measurement-based approaches is the large size of a network, where only a subset of measurements can be obtained because of the inaccessibility of the entire network. As the network becomes larger, a question arises as to how rapidly the monitoring resources (number of measurements or number of samples) must grow to obtain a desired monitoring accuracy. Our work studies the scalability of the measurements with respect to the size of the network. We investigate the issues of scalability and performance evaluation in IP networks, specifically focusing on fault and congestion diagnosis. We formulate network monitoring as a machine learning problem using probabilistic graphical models that infer network states using path-based measurements. We consider the theoretical and practical management resources needed to reliably diagnose congested/faulty network elements and provide fundamental limits on the relationships between the number of probe packets, the size of the network, and the ability to accurately diagnose such network elements. We derive lower bounds on the average number of probes per edge using the variational inference technique proposed in the context of graphical models under noisy probe measurements, and then propose an entropy lower (EL) bound by drawing similarities between the coding problem over a binary symmetric channel and the diagnosis problem. Our investigation is supported by simulation results. For the congestion diagnosis case, we propose a solution based on decoding linear error control codes on a binary symmetric channel for various probing experiments. To identify the congested nodes, we construct a graphical model, and infer congestion using the belief propagation algorithm. 
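The inference problem just described can be miniaturized as follows: given noisy end-to-end path observations, compute each link's posterior probability of being congested. The toy below enumerates link states exactly rather than running belief propagation (which the thesis uses precisely because exact enumeration does not scale), and all priors, noise rates, and topology here are invented.

```python
# Toy exact-posterior version of the congestion-diagnosis graphical model.
# A path observation reads "congested" iff any link on it is congested,
# but each observation is flipped with probability `flip` (noisy probes,
# analogous to a binary symmetric channel).

from itertools import product

def posterior_congested(paths, observations, prior=0.1, flip=0.05):
    """paths: list of tuples of link indices; observations[i] in {0, 1}.
    Returns the posterior probability of congestion for each link."""
    n_links = 1 + max(l for p in paths for l in p)
    marginals = [0.0] * n_links
    norm = 0.0
    for state in product([0, 1], repeat=n_links):  # enumerate link states
        p = 1.0
        for l in range(n_links):  # independent congestion prior per link
            p *= prior if state[l] else (1 - prior)
        for path, obs in zip(paths, observations):  # noisy OR-measurement
            truth = 1 if any(state[l] for l in path) else 0
            p *= (1 - flip) if obs == truth else flip
        norm += p
        for l in range(n_links):
            if state[l]:
                marginals[l] += p
    return [m / norm for m in marginals]

# Two probes both look congested; link 0 lies on both paths, link 1 on one.
post = posterior_congested(paths=[(0,), (0, 1)], observations=[1, 1])
```

Belief propagation approximates exactly these marginals by message passing on the factor graph, which is what makes diagnosis tractable as the network grows.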
In the second part of the work, we focus on the development of methods to automatically analyze the information contained in electron tomograms, which is a major challenge since tomograms are extremely noisy. Advances in automated data acquisition in electron tomography have led to an explosion in the amount of data that can be obtained about the spatial architecture of a variety of biologically and medically relevant objects with sizes in the range of 10-1000 nm. A fundamental step in the statistical inference over such large amounts of data is to segment relevant 3D features in cellular tomograms. Procedures for segmentation must work robustly and rapidly in spite of the low signal-to-noise ratios inherent in biological electron microscopy. This work evaluates various denoising techniques and then extracts relevant features of biological interest in tomograms of HIV-1 in infected human macrophages and in Bdellovibrio bacterial tomograms recorded at room and cryogenic temperatures. Our approach represents an important step in automating the efficient extraction of useful information from large datasets in biological tomography and in speeding up the process of reducing gigabyte-sized tomograms to relevant byte-sized data. Next, we investigate automatic techniques for segmentation and quantitative analysis of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope, and of tomograms of Liposomal Doxorubicin formulations (Doxil), an anticancer nanodrug, imaged at cryogenic temperatures. A machine learning approach is formulated that exploits texture features; joint image block-wise classification and segmentation is performed by histogram matching using a nearest-neighbor classifier with the chi-squared statistic as a distance measure.
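The final step of that pipeline, block-wise classification by histogram matching with a chi-squared distance and a nearest-neighbor rule, can be sketched in a few lines. The labels, histogram bins, and training values below are made up for illustration; the thesis's actual texture features are not specified in this record.

```python
# Minimal sketch of nearest-neighbor classification of image blocks by
# matching their (normalized) feature histograms under a chi-squared distance.

def chi_squared(h1, h2, eps=1e-12):
    # Chi-squared distance between two histograms; eps guards empty bins.
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def classify_block(block_hist, labeled_hists):
    """labeled_hists: list of (label, histogram) training examples.
    Returns the label of the nearest histogram under chi-squared distance."""
    return min(labeled_hists, key=lambda lh: chi_squared(block_hist, lh[1]))[0]

# Hypothetical training histograms for two texture classes.
training = [
    ("mitochondrion", [0.10, 0.20, 0.40, 0.30]),
    ("background",    [0.70, 0.20, 0.05, 0.05]),
]
label = classify_block([0.15, 0.25, 0.35, 0.25], training)
```

Running this per block yields a label map over the tomogram; joint classification-and-segmentation then smooths that map so neighboring blocks agree.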