121
GMPLS-OBS interoperability and routing scalability in the Internet
Mendoça Pedroso, Pedro Miguel, 16 December 2011
The popularization of the Internet has turned the telecom world upside down over the last two decades. Network operators, vendors and service providers are being challenged to adapt to Internet requirements so as to properly serve the huge number of demanding users (residential and business). The Internet (a data-oriented network) is supported by an IP packet-switched architecture running on top of a circuit-switched, optical-based architecture (a voice-oriented network), which results in a complex and rather costly infrastructure for the transport of IP traffic (the dominant traffic nowadays). A simpler, IP-adapted network architecture is therefore desired.
From the transport network perspective, both Generalized Multi-Protocol Label Switching (GMPLS) and Optical Burst Switching (OBS) technologies are part of the set of solutions for progressing towards an IP-over-WDM architecture, providing intelligence in the control and management of resources (i.e. GMPLS) as well as good network resource access and usage (i.e. OBS). The GMPLS framework is the key enabler for orchestrating a unified optical network control, and thus reduces network operational expenses (OPEX) while increasing operators' revenues. Simultaneously, OBS is one of the best-positioned switching technologies for realizing the envisioned IP-over-WDM network architecture, leveraging the statistical multiplexing of data-plane resources to enable sub-wavelength switching in optical networks. Despite the GMPLS principle of unified control, little effort has been put into extending it to incorporate the OBS technology, and many open questions still remain.
From the IP network perspective, the Internet is facing scalability issues, as enormous numbers of service instances and devices must be managed. It is widely believed that the current Internet features and mechanisms cannot cope with the size and dynamics of the Future Internet. Compact routing is one of the main breakthrough paradigms for designing a routing system that scales to the Future Internet requirements: it addresses the fundamental limits of current stretch-1 shortest-path routing in terms of routing table (RT) scalability, aiming at sub-linear RT growth. Although "static" compact routing works well, scaling logarithmically in the number of nodes even on scale-free graphs such as the Internet, it does not handle dynamic graphs. Moreover, as multimedia content and services proliferate, multicast is again under the spotlight, since bandwidth efficiency and small RTs are desired; yet multicast makes the scalability problem even worse, as more routing entries must be maintained.
In a nutshell, the main objective of this thesis is to contribute fully detailed solutions dealing with: i) GMPLS-OBS control interoperability (Part I), fostering unified control over multiple switching domains and reducing redundancy in IP transport. The proposed solution overcomes every technology-specific interoperability issue and offers (absolute) QoS guarantees, resolving OBS performance issues by exploiting the GMPLS traffic-engineering (TE) features; key extensions to the GMPLS protocol standards are also addressed. And ii) a new compact routing scheme for multicast scenarios, to overcome the scalability problem of the Future Internet inter-domain routing system (Part II). To this end, the first known name-independent (i.e. topology-unaware) compact multicast routing algorithm is proposed. In addition, the AnyTraffic Labeled concept is introduced, which saves forwarding entries by sharing a single forwarding entry between the unicast and multicast traffic types. Exhaustive simulation campaigns are run in both cases to assess the reliability and feasibility of the proposals.
122
Algorithmic Aspects of the Internet
Saberi, Amin, 12 July 2004
The goal of this thesis is to use and advance the techniques developed in the field of exact and approximation algorithms for many of the problems arising in the context of the Internet. We formalize the method of dual fitting and the idea of the factor-revealing LP, and use this combination to design and analyze two greedy algorithms for the metric uncapacitated facility location problem, with approximation factors of 1.861 and 1.61 respectively. We also provide the first polynomial-time algorithm for the linear version of the market equilibrium model defined by Irving Fisher in 1891; our algorithm is modeled after Kuhn's primal-dual algorithm for bipartite matching. Finally, we study the connectivity properties of the Internet graph and their impact on its structure. In particular, we consider the growth-with-preferential-attachment model of the Internet graph and prove that, under some reasonable assumptions, this graph has constant conductance.
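To give a flavor of the greedy approach analyzed there, the following minimal sketch implements the classic ratio-greedy heuristic for uncapacitated facility location: repeatedly open the facility whose opening-plus-connection cost per newly served city is smallest. This is a related illustrative greedy, not the exact 1.861 or 1.61 algorithms from the thesis, and the toy instance is made up.

```python
# Ratio-greedy heuristic for uncapacitated facility location (a sketch of
# the greedy flavor analyzed via dual fitting; not the thesis's exact
# 1.861/1.61 algorithms). All instance data below is hypothetical.

def greedy_facility_location(open_cost, dist, cities):
    """open_cost: {facility: cost}; dist: {(facility, city): distance}."""
    unserved = set(cities)
    opened, assignment = set(), {}
    while unserved:
        best = None  # (ratio, facility, clients-to-assign)
        for f, cf in open_cost.items():
            near = sorted(unserved, key=lambda c: dist[f, c])
            total = 0.0 if f in opened else cf  # opened facilities are free
            for k, c in enumerate(near, 1):
                total += dist[f, c]
                if best is None or total / k < best[0]:
                    best = (total / k, f, near[:k])
        _, f, clients = best
        opened.add(f)
        for c in clients:
            assignment[c] = f
            unserved.discard(c)
    return opened, assignment

# Toy instance (hypothetical data):
facilities = {"f1": 3.0, "f2": 4.0}
cities = ["c1", "c2", "c3"]
dist = {("f1", "c1"): 1, ("f1", "c2"): 2, ("f1", "c3"): 5,
        ("f2", "c1"): 4, ("f2", "c2"): 1, ("f2", "c3"): 1}
print(greedy_facility_location(facilities, dist, cities))
```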
123
Fine Granularity Video Compression Technique and Its Application to Robust Video Transmission over Wireless Internet
Su, Yih-ching, 22 December 2003
This dissertation deals with (a) a fine-granularity video compression technique and (b) its application to robust video transmission over the wireless Internet. First, two wavelet-domain motion estimation algorithms, HMRME (Half-pixel Multi-Resolution Motion Estimation) and HSDD (Hierarchical Sum of Double Difference metric), are proposed to give a wavelet-based FGS (Fine Granularity Scalability) video encoder either low-complexity or high-performance features. Second, a VLSI-friendly, high-performance embedded coder, ABEC (Array-Based Embedded Coder), is built to encode the motion compensation residue as a bitstream with fine granularity scalability. Third, an analysis of loss-rate prediction over the Gilbert channel with loss-rate feedback, together with several optimal FEC (Forward Error Correction) assignment schemes applicable to any real-time FGS video transmission system, is presented.
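For concreteness, the sketch below simulates packet losses under the two-state Gilbert model referred to above, assuming every packet sent in the Bad state is lost; the transition probabilities are made-up examples rather than parameters from the dissertation.

```python
# Two-state Gilbert loss model: the channel alternates between a Good state
# (no loss) and a Bad state (loss), with made-up transition probabilities.
import random

def gilbert_losses(n_packets, p_gb=0.05, p_bg=0.4, seed=1):
    """Return a list of booleans: True = packet lost (sent in Bad state)."""
    rng = random.Random(seed)
    bad, losses = False, []
    for _ in range(n_packets):
        # Stay bad with prob. 1 - p_bg; become bad with prob. p_gb.
        bad = rng.random() < ((1 - p_bg) if bad else p_gb)
        losses.append(bad)
    return losses

losses = gilbert_losses(100_000)
print("simulated loss rate:", sum(losses) / len(losses))
# The long-run loss rate approaches p_gb / (p_gb + p_bg), about 0.111 here;
# this is the kind of quantity a loss-rate feedback scheme would estimate
# when assigning FEC.
```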
In addition to the theoretical work mentioned above, and as a basis for future study of embedded systems for wireless FGS video transmission, an initial FPGA-based MPEG-4 video encoder has also been implemented in this work.
124
Design and Implementation of a Web-based Learning System on a Server Cluster
Ho, Jiun-Huei, 22 July 2005
This dissertation presents a scalable web-framework learning system, the Web-based Learning System (WebLS), addressing the distance-learning scenario. The rapid popularization of the Internet infrastructure and World Wide Web services has made them the most commonly used information platform and an important medium for education, and has driven the growth of the Web-based e-Learning model. Web-based e-Learning is not bound by time or space, which greatly enhances the effectiveness of online distance learning.
The WebLS aims at bringing together the most promising web technologies and standards in order to attain a scalable and highly available online learning environment. The scalable web framework includes a SCORM-based learning management system (the LMS), a server-cluster infrastructure, a learning content management service, an information and content repository (the LMS database), and an agent system supporting the innovative solutions adopted to implement scalability, availability, portability, reusability, and standardization.
The WebLS can store learning contents and provide a Web access portal to them for teachers, volunteers, and institutions that lack the resources or expertise to offer curricula over the Internet.
We therefore first design and implement the web-based learning management system, the Learning Management System (LMS), which conforms to the SCORM 1.2 e-Learning specification established by ADL and satisfies the basic functional requirements of online web-based learning. In addition, on the research topic of learning behavior analysis, we propose a method for extracting better learning paths, the Experience Matrix System with Time Fragment Extraction (EMST), which can analyse a learner's study behavior in a Web-based learning environment. This information is then used to explore and analyse students' learning paths in order to find the learning path best suited to the majority of learners.
As masses of learners concurrently enter the learning system, the system is often unable to serve such a massive workload, particularly during peak periods of learning activity. We use a server-cluster architecture as a way to create scalable and highly available solutions. However, hosting a variety of learning contents from different owners on such a distributed server system raises new design and management problems that require new solutions. This dissertation describes the research work we are pursuing to construct a system that addresses the challenges of hosting learning content in a server-farm environment.
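Purely as an illustration of how such a cluster might spread learner sessions, the sketch below implements a least-connections dispatcher; the policy and the server names are hypothetical, not details taken from the dissertation.

```python
# Hypothetical least-connections dispatcher for a learning-content server
# cluster; a sketch of one common policy, not WebLS's actual mechanism.
import heapq

class LeastConnectionsDispatcher:
    def __init__(self, servers):
        # Min-heap of (active_session_count, server_name).
        self.heap = [(0, s) for s in servers]
        heapq.heapify(self.heap)

    def assign(self):
        """Route a new learner session to the least-loaded server."""
        count, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (count + 1, server))
        return server

    def release(self, server):
        """Decrement a server's count when a session ends."""
        self.heap = [(c - 1 if s == server else c, s) for c, s in self.heap]
        heapq.heapify(self.heap)

dispatcher = LeastConnectionsDispatcher(["lms-1", "lms-2", "lms-3"])
print([dispatcher.assign() for _ in range(5)])
```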
125
Adaptable, scalable, probabilistic fault detection and diagnostic methods for the HVAC secondary system
Li, Zhengwei, 30 March 2012
As the popularity of building automation systems (BAS) increases, there is a growing need to understand and analyze HVAC system behavior from the monitoring data. However, several constraints prevent fault detection and diagnosis (FDD) technology from being widely accepted: 1) the diagnostic results are difficult to understand; 2) FDD methods have strong system dependency and low adaptability; 3) the performance of FDD methods is still not satisfactory; and 4) information is often lacking.
This thesis aims at removing these constraints, with a specific focus on the air handling unit (AHU), one of the most common HVAC components in commercial buildings. To achieve this target, the following work has been done. On understanding the diagnostic results, a standard information structure comprising probability, criticality and risk is proposed. On improving a method's adaptability, a low-system-dependency FDD method, the rule-augmented CUSUM method, is developed and tested, and another highly adaptable method, the principal component analysis (PCA) method, is implemented and tested. On improving the overall FDD performance (detection sensitivity and diagnostic accuracy), the hypothesis that an integrated approach combining different FDD methods can improve FDD performance is proposed, and both deterministic and probabilistic integration approaches are implemented to verify it. On understanding the value of information, the FDD results for a test system under different information-availability scenarios are compared.
The results show that the rule-augmented CUSUM method is able to detect abrupt faults and most incipient faults, and is therefore a reliable method to use. The results also show that an overall improvement of FDD performance is possible with the Bayesian integration approach, given accurate parameters (sensitivity and specificity), but is not guaranteed with the deterministic integration approach, although the latter is simpler to use. The study of information availability reveals that most faults can be detected in the low and medium information-availability scenarios; moving further to the high information-availability scenario only slightly improves the diagnostic performance.
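As a concrete illustration of the probabilistic integration idea, the sketch below combines the verdicts of several fault detectors via Bayes' rule under a conditional-independence assumption; the sensitivities, specificities and prior are made-up numbers, not parameters estimated in the thesis.

```python
# Bayesian integration of multiple FDD methods (naive-Bayes combination,
# assuming conditionally independent detectors). All numbers are
# illustrative, not values from the thesis.

def bayes_integrate(prior_fault, verdicts):
    """verdicts: list of (alarm: bool, sensitivity, specificity)."""
    p_fault = prior_fault
    p_normal = 1.0 - prior_fault
    for alarm, sens, spec in verdicts:
        if alarm:
            p_fault *= sens            # P(alarm | fault)
            p_normal *= (1.0 - spec)   # P(alarm | no fault)
        else:
            p_fault *= (1.0 - sens)    # P(no alarm | fault)
            p_normal *= spec           # P(no alarm | no fault)
    return p_fault / (p_fault + p_normal)  # posterior P(fault | evidence)

# Two detectors alarm, one stays silent:
verdicts = [(True, 0.90, 0.95), (True, 0.70, 0.90), (False, 0.80, 0.85)]
print(bayes_integrate(prior_fault=0.1, verdicts=verdicts))
```

Delivering the posterior probability itself, rather than a binary alarm, is what lets the result carry the probability/criticality/risk information structure proposed above.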
The key message from this thesis to the community is that using a Bayesian approach to integrate highly adaptable FDD methods, and delivering the results in a probability context, is an optimal way to remove the current constraints and push FDD technology to a new position.
126
Scalable frameworks and algorithms for cluster ensembles and clustering data streams
Hore, Prodip, 1 June 2007
Clustering algorithms are an important tool for data mining and data analysis. They fall under the category of unsupervised learning algorithms, which can group patterns without an external teacher or labels, using some kind of similarity metric. Clustering algorithms are generally iterative and computationally intensive, and for data sets larger than memory they incur disk accesses in every iteration, making them unacceptably slow. Data can instead be processed in chunks that fit into memory, providing a scalable framework, and multiple processors may be used to process chunks in parallel. The clustering solutions from the chunks together form an ensemble and can be merged to provide a global solution, so merging multiple clustering solutions, an ensemble, is central to providing a scalable framework.
Combining multiple clustering solutions, or partitions, is also important for obtaining a robust clustering solution, for merging distributed clustering solutions, and for providing a knowledge-reuse and privacy-preserving data mining framework. Here we address combining multiple clustering solutions in a scalable framework. We also propose algorithms for incrementally clustering large or very large data sets, including an algorithm that can cluster large data sets in a single pass; this algorithm is further extended to handle clustering of infinite data streams. Such incremental/online algorithms can be used for real-time processing, as they do not revisit data and can process data streams under the constraints of limited buffer size and computational time. Thus, different frameworks/algorithms are proposed to address scalability issues in different settings.
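As an illustration of the chunked scheme just described, the following sketch clusters each chunk with weighted fuzzy c-means and condenses the result into weighted centroids that are carried into the next chunk; it shows the general single-pass idea, not the authors' exact algorithm.

```python
# Single-pass, chunked clustering with weighted fuzzy c-means: each chunk
# is summarized by c weighted centroids that join the next chunk. A sketch
# of the general scheme, not the thesis's exact algorithm.
import numpy as np

def weighted_fcm(X, w, c, m=2.0, iters=50, seed=0):
    """Weighted fuzzy c-means on points X (n, d) with weights w (n,)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1) + 1e-12
        u = 1.0 / d2 ** (1.0 / (m - 1.0))   # memberships, unnormalized
        u /= u.sum(axis=1, keepdims=True)
        um = (u ** m) * w[:, None]
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

def single_pass_fcm(chunks, c):
    carry_X = np.empty((0, chunks[0].shape[1]))
    carry_w = np.empty(0)
    for chunk in chunks:
        X = np.vstack([carry_X, chunk])
        w = np.concatenate([carry_w, np.ones(len(chunk))])
        centers, u = weighted_fcm(X, w, c)
        # Condense everything seen so far into c weighted centroids.
        carry_X, carry_w = centers, (u * w[:, None]).sum(axis=0)
    return carry_X

rng = np.random.default_rng(42)
chunks = [np.concatenate([rng.normal(m, 0.2, (70, 2)) for m in (0, 3, 6)])
          for _ in range(5)]
print(single_pass_fcm(chunks, c=3))  # centers near (0,0), (3,3), (6,6)
```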
To our knowledge, we are the first to introduce algorithms for merging cluster ensembles that are scalable, in terms of time and space complexity, to large real-world data sets. We are also the first to introduce single-pass and streaming variants of the fuzzy c-means algorithm. We have evaluated the performance of the proposed frameworks/algorithms on both artificial and large real-world data sets, and a comparison with other relevant algorithms is discussed. These comparisons show the scalability of the new algorithms and the effectiveness of the partitions they create.
127
Scalable and network aware video coding for advanced communications over heterogeneous networks
Muhammad, Sanusi, January 2013
This work addresses the issues involved in providing scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and users' acceptable quality of service. In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques that achieve automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources, ensuring that various levels of user-acceptable QoS are considered. The techniques are further evaluated to establish their performance against state-of-the-art scalable and non-scalable techniques.

To further improve the adaptability of the designed techniques, several experiments and real-time simulations are conducted with the aim of determining the optimum performance under various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analyzed to determine their performance on various types of video content and formats. Several algorithms are developed to dynamically adapt coding tools and parameters to a specific video content type, format and transmission bandwidth.

Because in heterogeneous networks the channel conditions, terminals, user capabilities and preferences, etc. change unpredictably, limiting the adaptability of any single technique, a Dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator, and the network-aware selection of the scalability techniques is based on real-time simulation results. A technique with minimum delay, low bit-rate, low frame rate and low quality is adopted as a reactive measure to a predicted bad channel condition. If the use of these techniques is not favoured because of deteriorating channel conditions, a reduced layered stream or the base layer is used. If the network status does not allow the use of the base layer, the stream uses high-efficiency parameter identifiers to improve the scalability and adaptation of the video service. To further improve the flexibility and efficiency of the algorithm, dynamic de-blocking filter and lambda value selection are analyzed and introduced into the algorithm. Various methods, interfaces and algorithms are defined for transcoding from one technique to another and for extracting sub-streams when the network conditions do not allow transmission of the entire bit-stream.
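A minimal, hypothetical sketch of the fallback ladder this decision logic implies is given below: a scalability technique is selected from a monitored channel report and degraded stepwise as conditions worsen. The thresholds, report fields and technique labels are illustrative assumptions, not values or names from the thesis.

```python
# Hypothetical sketch of SADMA-style technique selection from monitored
# channel conditions; thresholds and labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ChannelReport:
    bandwidth_kbps: float
    loss_rate: float      # fraction of packets lost
    predicted_bad: bool   # channel predicted to deteriorate

def select_technique(report: ChannelReport) -> str:
    if report.predicted_bad:
        # Reactive measure: minimum delay, low bit-rate/frame-rate/quality.
        return "minimum-delay low-rate technique"
    if report.bandwidth_kbps > 2000 and report.loss_rate < 0.01:
        return "full scalable bit-stream (all layers)"
    if report.bandwidth_kbps > 512:
        return "reduced layered stream"
    if report.bandwidth_kbps > 128:
        return "base layer only"
    # Last resort per the description above: high-efficiency parameter
    # identifiers when even the base layer cannot be carried.
    return "high-efficiency parameter identifiers"

print(select_technique(ChannelReport(300, 0.02, False)))  # base layer only
```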
128
A study of transient bottlenecks: understanding and reducing the latency long-tail problem in n-tier web applications
Wang, Qingyang, 21 September 2015
An essential requirement of cloud computing and data centers is to simultaneously achieve good performance and high utilization for cost efficiency. High utilization through virtualization and hardware resource sharing is critical for both cloud providers and cloud consumers to reduce management and infrastructure costs (e.g., energy cost, hardware cost) and to increase cost-efficiency. Unfortunately, achieving good performance (e.g., low latency) for web applications at high resource utilization remains an elusive goal: both practitioners and researchers have experienced the latency long-tail problem in clouds during periods of even moderate utilization (e.g., 50%). In this dissertation, we show that transient bottlenecks are an important contributing factor to the latency long-tail problem. Transient bottlenecks are bottlenecks with a short lifespan, on the order of tens of milliseconds. Though short-lived, a transient bottleneck can cause a long-tail response-time distribution spanning 2 to 3 orders of magnitude, from tens of milliseconds to tens of seconds, due to the queuing-effect propagation and amplification caused by complex inter-tier resource dependencies in the system. Transient bottlenecks can arise from a wide range of factors at different system layers. For example, we have identified transient bottlenecks caused by CPU dynamic voltage and frequency scaling (DVFS) control at the CPU architecture layer, Java garbage collection (GC) at the system software layer, and virtual machine (VM) consolidation at the application layer. These factors interact with the naturally bursty workloads from clients, often leading to transient bottlenecks that degrade overall performance even when all the system resources are far from saturated (e.g., less than 50% utilized). By combining fine-grained monitoring tools with a sophisticated analytical method to generate and analyze monitoring data, we are able to detect and study transient bottlenecks in a systematic way.
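A toy queueing simulation makes this amplification concrete: in the sketch below, a single FIFO server running at roughly 50% average utilization slows down for a burst lasting on the order of tens of milliseconds, and the requests queued behind it inherit the delay, stretching the tail by 2 to 3 orders of magnitude. All parameters are made-up illustrative numbers, not measurements from the dissertation.

```python
# Toy single-server FIFO simulation of a transient bottleneck: ~50% average
# utilization, with a brief 20x slowdown; illustrative parameters only.
import random

def simulate(n=50_000, svc_ms=1.0, gap_ms=2.0, slow_at=10_000, slow_for=50):
    rng = random.Random(7)
    t = free_at = 0.0
    latencies = []
    for i in range(n):
        t += rng.expovariate(1.0 / gap_ms)        # Poisson arrivals
        slow = slow_at <= i < slow_at + slow_for  # transient bottleneck
        service = svc_ms * (20.0 if slow else 1.0)
        start = max(t, free_at)                   # FIFO queueing
        free_at = start + service
        latencies.append(free_at - t)
    latencies.sort()
    for q in (0.50, 0.99, 0.999):
        print(f"p{q * 100:g} latency: {latencies[int(q * n) - 1]:.1f} ms")

simulate()
# The median stays near the 1 ms service time, while the upper percentiles
# reflect the backlog left behind by the 50-request slowdown.
```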
129
Retrospect on contemporary Internet organization and its challenges in the future
Gutierrez De Lara, Felipe, 25 July 2011
The intent of this report is to expose the audience to the contemporary organization of the Internet, to highlight the challenges it must deal with in the future, and to describe the current efforts being made to overcome such threats. The report aims to build a frame of reference for how the Internet is currently structured and how the different layers interact to make it possible for the Internet to exist as we know it. Additionally, it explores the challenges the current Internet architecture design is facing, the reasons these challenges are arising, and the multiple efforts under way to keep the Internet working. In order to reach these objectives I visited multiple sites of organizations whose sole reason for existence is to support the Internet and keep it functioning. The approach used to write this report was to research the topic through multiple technical papers from the IEEE database and network conference reviews, and to analyze and present their findings. This report uses that information to elaborate on how network engineers are handling the challenge of keeping the Internet functional while supporting dynamic requirements. It describes the challenges the Internet is facing with scalability, the availability of debugging tools, security, mobility, reliability, and quality of service, and explains briefly how each of these challenges affects the Internet and the strategies in place to overcome them. The final objectives are to inform the reader of how the Internet works with a set of ever-changing and growing requirements, to give an overview of the multiple institutions dedicated to reinforcing the Internet, and to provide a list of current challenges and the actions being taken to overcome them.
130
Video content analysis for automated detection and tracking of humans in CCTV surveillance applications
Tawiah, Thomas Andzi-Quainoo, January 2010
The problems of achieving a high detection rate with a low false-alarm rate for human detection and tracking in video sequences, performance scalability, and improving response time are addressed in this thesis. The underlying causes are the effects of scene complexity, human-to-human interactions, scale changes, and scene background-human interactions. A two-stage processing solution, namely human detection followed by human tracking, with two novel pattern classifiers, is presented. Scale-independent human detection is achieved by processing in the wavelet domain using square wavelet features; these features, used to characterise human silhouettes at different scales, are similar to the rectangular features used in [Viola 2001]. At the detection stage, two detectors are combined to improve the detection rate. The first detector is based on the shape outline of humans extracted from the scene using a reduced-complexity outline extraction algorithm, with a shape-mismatch measure used to differentiate between the human and background classes. The second detector uses rectangular features as primitives for silhouette description in the wavelet domain: the marginal distribution of features collocated at a particular position on a candidate human (a patch of the image) is used to describe the silhouette statistically, and similarity measures computed between a candidate human and the model histograms of the human and non-human classes are used to discriminate between the two classes. At the tracking stage, a tracker based on the joint probabilistic data association filter (JPDAF) is presented for data association and motion correspondence, with track clustering used to reduce the complexity of hypothesis enumeration. Towards improving response time with increases in frame dimension, scene complexity, and number of channels, a scalable algorithmic architecture and an operating-accuracy prediction technique are presented, together with a scheduling strategy for improving response time and throughput through parallel processing.
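As an illustration of the rectangular-feature primitive borrowed from [Viola 2001], the sketch below evaluates a two-rectangle feature in constant time using an integral image; the particular feature layout and window coordinates are illustrative assumptions, not the thesis's actual feature set.

```python
# Rectangular (Haar-like) feature evaluation via an integral image; the
# specific two-rectangle feature and coordinates are illustrative.
import numpy as np

def integral_image(img):
    """Cumulative 2-D sum, so any box sum costs at most four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from integral image ii (exclusive ends)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r, c, h, w):
    """Left-half minus right-half response over the window (r, c, h, w)."""
    left = box_sum(ii, r, c, r + h, c + w // 2)
    right = box_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

img = np.random.default_rng(0).random((64, 32))  # stand-in image patch
ii = integral_image(img)
print(two_rect_feature(ii, r=8, c=4, h=16, w=8))
```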