121

Algorithmic Aspects of the Internet

Saberi, Amin 12 July 2004 (has links)
The goal of this thesis is to use and advance the techniques developed in the field of exact and approximation algorithms for many of the problems arising in the context of the Internet. We formalize the method of dual fitting and the idea of the factor-revealing LP, and use this combination to design and analyze two greedy algorithms for the metric uncapacitated facility location problem, with approximation factors of 1.861 and 1.61 respectively. We also provide the first polynomial-time algorithm for the linear version of the market equilibrium model defined by Irving Fisher in 1891; our algorithm is modeled after Kuhn's primal-dual algorithm for bipartite matching. We also study the connectivity properties of the Internet graph and their impact on its structure. In particular, we consider the growth-with-preferential-attachment model of the Internet graph and prove that, under some reasonable assumptions, this graph has constant conductance.
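For illustration only, here is a minimal Python sketch of the classical star-based greedy for metric uncapacitated facility location. It is not the thesis's 1.861- or 1.61-factor algorithms, whose analysis rests on dual fitting and a factor-revealing LP; the input layout (`open_cost`, `dist`) is an assumption made for the example.

```python
def greedy_facility_location(open_cost, dist):
    """Star-based greedy for uncapacitated facility location.

    open_cost[i] : cost of opening facility i
    dist[i][j]   : metric connection cost between facility i and client j
    Returns (set of open facilities, assignment of each client to a facility).
    """
    n_clients = len(dist[0])
    unassigned = set(range(n_clients))
    opened, assignment = set(), {}

    while unassigned:
        best = None  # (cost-effectiveness, facility, chosen clients)
        for i, f_cost in enumerate(open_cost):
            # Sort still-unassigned clients by distance to facility i
            # and consider every prefix as a candidate "star".
            nearby = sorted(unassigned, key=lambda j: dist[i][j])
            total = 0.0 if i in opened else f_cost  # already-open facilities cost nothing extra
            for k, j in enumerate(nearby, start=1):
                total += dist[i][j]
                ratio = total / k
                if best is None or ratio < best[0]:
                    best = (ratio, i, nearby[:k])
        _, i, star = best
        opened.add(i)
        for j in star:
            assignment[j] = i
            unassigned.discard(j)
    return opened, assignment
```

Each round opens the facility whose best "star" (the facility plus a prefix of its nearest unassigned clients) has the lowest average cost; a dual-fitting analysis of the kind formalized in the thesis bounds how far such greedy choices can drift from the LP optimum.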
122

Fine Granularity Video Compression Technique and Its Application to Robust Video Transmission over Wireless Internet

Su, Yih-ching 22 December 2003 (has links)
This dissertation deals with (a) a fine granularity video compression technique and (b) its application to robust video transmission over the wireless Internet. First, two wavelet-domain motion estimation algorithms, HMRME (Half-pixel Multi-Resolution Motion Estimation) and HSDD (Hierarchical Sum of Double Difference Metric), are proposed to give the wavelet-based FGS (Fine Granularity Scalability) video encoder either low-complexity or high-performance characteristics. Second, a VLSI-friendly, high-performance embedded coder, ABEC (Array-Based Embedded Coder), is built to encode the motion-compensation residue as a bitstream with fine granularity scalability. Third, an analysis of loss-rate prediction over the Gilbert channel with loss-rate feedback, together with several optimal FEC (Forward Error Correction) assignment schemes applicable to any real-time FGS video transmission system, is presented. In addition to the theoretical work above, an initial FPGA-based MPEG-4 video encoder has also been implemented in this work as groundwork for future study of embedded systems for wireless FGS video transmission.
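As background for the loss-rate prediction mentioned above, here is a small Python sketch of the two-state Gilbert loss model: the channel alternates between a Good and a Bad state, and packets are lost (with some probability) only in the Bad state. The function name and parameters are illustrative, not taken from the dissertation.

```python
import random

def simulate_gilbert_losses(n_packets, p_gb, p_bg, loss_in_bad=1.0, seed=0):
    """Simulate packet losses over a two-state Gilbert channel.

    p_gb : probability of moving Good -> Bad between packets
    p_bg : probability of moving Bad  -> Good between packets
    loss_in_bad : per-packet loss probability while in the Bad state
    Returns (loss indicators, empirical loss rate, steady-state predicted rate).
    """
    rng = random.Random(seed)
    state_bad = False
    losses = []
    for _ in range(n_packets):
        # State transition first, then a loss draw in the current state.
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
        lost = state_bad and rng.random() < loss_in_bad
        losses.append(1 if lost else 0)
    empirical_rate = sum(losses) / n_packets
    # Long-run fraction of time in the Bad state is p_gb / (p_gb + p_bg).
    predicted_rate = loss_in_bad * p_gb / (p_gb + p_bg)
    return losses, empirical_rate, predicted_rate

# Example: bursty channel spending roughly 9% of the time in the Bad state.
_, measured, predicted = simulate_gilbert_losses(10_000, p_gb=0.01, p_bg=0.10)
```

The steady-state loss rate predicted by the model is the quantity an FEC assignment scheme would budget redundancy against when loss-rate feedback is available.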
123

Design and Implementation a Web-based Learning System on Server Cluster

Ho, Jiun-Huei 22 July 2005 (has links)
This dissertation presents a scalable web-framework learning system, the Web-based Learning System (WebLS), addressing the distance-learning scenario. With the rapid spread of the Internet infrastructure, the World Wide Web has become the most commonly used information platform and an important medium for education, and has expanded into the Web-based e-Learning model. Web-based e-Learning is not subject to boundaries of time or space, which greatly enhances the effectiveness of online distance learning. WebLS aims to bring together the most promising web technologies and standards in order to attain a scalable and highly available online learning environment. The scalable web framework includes a SCORM-based learning management system (LMS), a server-cluster infrastructure, a learning content management service, an information and content repository (the LMS database), and an agent system supporting the innovative solutions adopted to achieve scalability, availability, portability, reusability, and standardization. WebLS can store learning content, and provide a Web access portal to it, for teachers, volunteers, and institutions that lack the resources or expertise to offer curricula over the Internet. First, we design and implement the web-based Learning Management System (LMS), which conforms to the SCORM 1.2 e-Learning specification established by ADL and satisfies the basic functional requirements of online web-based learning. In addition, on the topic of learning-behavior analysis, we propose a method for extracting better learning paths, the Experience Matrix System with Time Fragment Extraction (EMST), which analyses a learner's study behavior in the Web-based learning environment; this information is then used to explore and analyse students' learning paths in order to find learning paths suitable for more learners. When masses of learners enter the learning system concurrently, the system is often unable to serve such a massive workload, particularly during peak periods of learning activity. We use a server-cluster architecture as a way to create scalable and highly available solutions. However, hosting a variety of learning content from different owners on such a distributed server system raises new design and management problems and requires new solutions. This dissertation describes the research work we are pursuing to construct a system that addresses the challenges of hosting learning content in a server-farm environment.
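To illustrate the server-cluster side of such a system, the sketch below shows one common dispatch policy, least connections, for spreading learner requests across cluster nodes. It is a generic illustration, not the WebLS implementation; the node names are hypothetical.

```python
class LeastConnectionsDispatcher:
    """Assign incoming learner requests to the least-loaded cluster node."""

    def __init__(self, nodes):
        self.active = {node: 0 for node in nodes}  # node -> current connection count

    def assign(self):
        node = min(self.active, key=self.active.get)
        self.active[node] += 1
        return node

    def release(self, node):
        self.active[node] -= 1

# Example: three hypothetical LMS back-end nodes behind the dispatcher.
dispatcher = LeastConnectionsDispatcher(["lms-1", "lms-2", "lms-3"])
node = dispatcher.assign()   # e.g. "lms-1"
dispatcher.release(node)
```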
124

Adaptable, scalable, probabilistic fault detection and diagnostic methods for the HVAC secondary system

Li, Zhengwei 30 March 2012 (has links)
As the popularity of building automation systems (BAS) increases, there is a growing need to understand and analyze HVAC system behavior from the monitoring data. However, several constraints prevent FDD technology from being widely accepted: 1) the diagnostic results are difficult to understand; 2) FDD methods have strong system dependency and low adaptability; 3) the performance of FDD methods is still not satisfactory; and 4) there is a lack of information. This thesis aims at removing these constraints, with a specific focus on the air handling unit (AHU), one of the most common HVAC components in commercial buildings. To achieve this target, the following work has been done. On understanding the diagnostic results, a standard information structure including probability, criticality, and risk is proposed. On improving the methods' adaptability, a low-system-dependency FDD method, the rule-augmented CUSUM method, is developed and tested, and another highly adaptable method, principal component analysis (PCA), is implemented and tested. On improving overall FDD performance (detection sensitivity and diagnostic accuracy), the hypothesis that an integrated approach combining different FDD methods could improve FDD performance is proposed, and both deterministic and probabilistic integration approaches are implemented to verify it. On understanding the value of information, the FDD results for a test system under different information-availability scenarios are compared. The results show that the rule-augmented CUSUM method is able to detect abrupt faults and most incipient faults, and is therefore a reliable method to use. The results also show that overall improvement of FDD performance is possible with the Bayesian integration approach, given accurate parameters (sensitivity and specificity), but is not guaranteed with the deterministic integration approach, although the latter is simpler to use. The study of information availability reveals that most faults can be detected in the low and medium information-availability scenarios; moving further to the high information-availability scenario only slightly improves diagnostic performance. The key message from this thesis to the community is that using a Bayesian approach to integrate highly adaptable FDD methods and delivering the results in a probability context is an optimal way to remove the current constraints and push FDD technology to a new position.
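For reference, the core of a standard one-sided CUSUM detector looks like the sketch below; the thesis's rule-augmented variant layers domain rules on top of this statistic, which are not reproduced here. The parameter names (`drift`, `threshold`) follow common CUSUM usage rather than the thesis.

```python
def cusum_detect(samples, target, drift=0.5, threshold=5.0):
    """One-sided CUSUM detector for an upward shift in a monitored signal.

    samples   : sequence of residuals (measurement minus model prediction)
    target    : expected residual mean under fault-free operation
    drift     : slack value k, typically about half the shift to detect
    threshold : decision interval h; an alarm fires when the statistic exceeds it
    Returns the index of the first alarm, or None if no fault is flagged.
    """
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + (x - target - drift))
        if s > threshold:
            return t
    return None

# Example: residuals between measured and expected supply-air temperature.
alarm_at = cusum_detect([0.1, -0.2, 0.3, 1.8, 2.1, 2.4], target=0.0, threshold=3.0)
# returns 5 (alarm on the sixth sample)
```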
125

Scalable frameworks and algorithms for cluster ensembles and clustering data streams

Hore, Prodip 01 June 2007 (has links)
Clustering algorithms are an important tool for data mining and data analysis purposes. Clustering algorithms fall under the category of unsupervised learning algorithms, which can group patterns without an external teacher or labels using some kind of similarity metric. Clustering algorithms are generally iterative in nature and computationally intensive, and for data sets larger than memory they incur disk accesses in every iteration, making them unacceptably slow. Data could instead be processed in chunks that fit into memory, providing a scalable framework, and multiple processors may be used to process chunks in parallel. The clustering solutions from the chunks together form an ensemble and can be merged to provide a global solution; merging multiple clustering solutions, an ensemble, is therefore important for providing a scalable framework. Combining multiple clustering solutions or partitions is also important for obtaining a robust clustering solution, merging distributed clustering solutions, and providing a knowledge-reuse and privacy-preserving data mining framework. Here we address combining multiple clustering solutions in a scalable framework. We also propose algorithms for incrementally clustering large or very large data sets, including an algorithm that can cluster large data sets in a single pass; this algorithm is also extended to handle clustering of infinite data streams. These incremental/online algorithms can be used for real-time processing, as they do not revisit data and are capable of processing data streams under the constraints of limited buffer size and computational time. Thus, different frameworks and algorithms are proposed to address scalability issues in different settings. To our knowledge, we are the first to introduce algorithms for merging cluster ensembles that are scalable, in terms of time and space complexity, on large real-world data sets. We are also the first to introduce single-pass and streaming variants of the fuzzy c-means algorithm. We have evaluated the performance of our proposed frameworks and algorithms on both artificial and large real-world data sets, and a comparison with other relevant algorithms is discussed. These comparisons show the scalability and the effectiveness of the partitions created by the new algorithms.
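The sketch below illustrates the general single-pass idea: each in-memory chunk is clustered together with weighted centroids carried over from earlier chunks, so memory stays bounded by the chunk size. It uses hard (k-means-style) assignments rather than the fuzzy c-means memberships developed in the dissertation, so it is an illustration of the framework, not the proposed algorithm.

```python
import random

def weighted_kmeans(points, weights, k, iters=20, seed=0):
    """Plain weighted k-means (Lloyd's algorithm) on low-dimensional points."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        sums = [[0.0] * len(points[0]) for _ in range(k)]
        wsum = [0.0] * k
        for p, w in zip(points, weights):
            c = min(range(k), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            wsum[c] += w
            for d, a in enumerate(p):
                sums[c][d] += w * a
        for i in range(k):
            if wsum[i] > 0:  # keep the old center if a cluster goes empty
                centers[i] = [s / wsum[i] for s in sums[i]]
    return centers

def single_pass_cluster(chunks, k):
    """Cluster a large data set in one pass by processing it chunk by chunk."""
    carried_points, carried_weights = [], []
    for chunk in chunks:
        points = carried_points + list(chunk)
        weights = carried_weights + [1.0] * len(chunk)
        centers = weighted_kmeans(points, weights, k)
        # Condense everything seen so far into k weighted centroids.
        new_weights = [0.0] * k
        for p, w in zip(points, weights):
            c = min(range(k), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            new_weights[c] += w
        carried_points, carried_weights = centers, new_weights
    return carried_points, carried_weights
```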
126

Scalable and network aware video coding for advanced communications over heterogeneous networks

Muhammad, Sanusi January 2013 (has links)
This work addresses the issues concerned with the provision of scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and users' acceptable quality of service. In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques that achieve automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources, with consideration given to various levels of user-acceptable QoS. The techniques are further evaluated to establish their performance against state-of-the-art scalable and non-scalable techniques. To further improve the adaptability of the designed techniques, several experiments and real-time simulations are conducted with the aim of determining the optimum performance under various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analyzed to determine their performance for various types of video content and formats. Several algorithms are developed to dynamically adapt coding tools and parameters to the specific video content type, format, and transmission bandwidth. Because in heterogeneous networks the channel conditions, terminals, and user capabilities and preferences change unpredictably, limiting the adaptability of any single technique, a Dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator, and the network-aware selection of the scalability techniques is based on real-time simulation results. A technique with minimum delay, low bit rate, low frame rate, and low quality is adopted as a reactive measure to a predicted bad channel condition. If deteriorating reported channel conditions rule out the use of these techniques, a reduced layered stream or the base layer is used. If the network status does not allow even the base layer, then the stream uses high-efficiency parameter identifiers to improve the scalability and adaptation of the video service. To further improve the flexibility and efficiency of the algorithm, dynamic de-blocking filter and lambda value selection are analyzed and introduced into the algorithm. Various methods, interfaces, and algorithms are defined for transcoding from one technique to another and for extracting sub-streams when the network conditions do not allow transmission of the entire bit-stream.
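As a rough illustration of this kind of network-aware decision logic, the sketch below maps monitored channel feedback to a stream configuration. The thresholds, layer names, and output fields are invented for the example and are not SADMA's actual rules.

```python
def select_stream_config(bandwidth_kbps, loss_rate, rtt_ms):
    """Pick a scalable-stream configuration from monitored channel feedback.

    Thresholds and layer names are illustrative placeholders only.
    """
    if loss_rate > 0.10 or bandwidth_kbps < 128 or rtt_ms > 500:
        # Predicted bad channel: fall back to the base layer only,
        # with low frame rate and low resolution to minimise delay.
        return {"layers": ["base"], "fps": 10, "resolution": "QCIF"}
    if loss_rate > 0.03 or bandwidth_kbps < 512:
        # Moderate channel: base layer plus one enhancement layer.
        return {"layers": ["base", "enh1"], "fps": 15, "resolution": "CIF"}
    # Good channel: transmit the full scalable bit-stream.
    return {"layers": ["base", "enh1", "enh2"], "fps": 30, "resolution": "4CIF"}

config = select_stream_config(bandwidth_kbps=300, loss_rate=0.05, rtt_ms=120)
# -> {"layers": ["base", "enh1"], "fps": 15, "resolution": "CIF"}
```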
127

A study of transient bottlenecks: understanding and reducing latency long-tail problem in n-tier web applications

Wang, Qingyang 21 September 2015 (has links)
An essential requirement of cloud computing and data centers is to simultaneously achieve good performance and high utilization for cost efficiency. High utilization through virtualization and hardware resource sharing is critical for both cloud providers and cloud consumers to reduce management and infrastructure costs (e.g., energy cost, hardware cost) and to increase cost-efficiency. Unfortunately, achieving good performance (e.g., low latency) for web applications at high resource utilization remains an elusive goal. Both practitioners and researchers have experienced the latency long-tail problem in clouds during periods of even moderate utilization (e.g., 50%). In this dissertation, we show that transient bottlenecks are an important contributing factor to the latency long-tail problem. Transient bottlenecks are bottlenecks with a short lifespan, on the order of tens of milliseconds. Though short-lived, a transient bottleneck can cause a long-tail response-time distribution that spans 2 to 3 orders of magnitude, from tens of milliseconds to tens of seconds, due to the propagation and amplification of queuing effects caused by complex inter-tier resource dependencies in the system. Transient bottlenecks can arise from a wide range of factors at different system layers. For example, we have identified transient bottlenecks caused by CPU dynamic voltage and frequency scaling (DVFS) control at the CPU architecture layer, Java garbage collection (GC) at the system software layer, and virtual machine (VM) consolidation at the application layer. These factors interact with naturally bursty workloads from clients, often leading to transient bottlenecks that cause overall performance degradation even when all the system resources are far from saturated (e.g., less than 50% utilized). By combining fine-grained monitoring tools and a sophisticated analytical method to generate and analyze monitoring data, we are able to detect and study transient bottlenecks in a systematic way.
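The sketch below conveys why fine-grained monitoring matters for this problem: it flags intervals in which a resource is saturated even though the surrounding longer window looks only moderately utilized, which is exactly the signature that coarse averages hide. The sampling interval, thresholds, and window size are illustrative assumptions, not the dissertation's tooling.

```python
def find_transient_bottlenecks(samples, interval_ms=50, busy_threshold=0.95,
                               avg_window_ms=1000, avg_threshold=0.5):
    """Flag short saturation episodes hidden by low average utilization.

    samples : per-interval utilization of one resource (0.0-1.0), collected
              at fine granularity (e.g., every 50 ms).
    Returns (time_ms, instantaneous_util, window_avg) for each flagged interval.
    """
    per_window = max(1, avg_window_ms // interval_ms)
    episodes = []
    for t, u in enumerate(samples):
        lo = max(0, t - per_window // 2)
        hi = min(len(samples), t + per_window // 2 + 1)
        window_avg = sum(samples[lo:hi]) / (hi - lo)
        # Saturated now, but the surrounding second looks only half-busy.
        if u >= busy_threshold and window_avg <= avg_threshold:
            episodes.append((t * interval_ms, u, window_avg))
    return episodes
```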
128

Retrospect on contemporary Internet organization and its challenges in the future

Gutierrez De Lara, Felipe 25 July 2011 (has links)
The intent of this report is to expose the audience to the contemporary organization of the Internet and to highlight the challenges it has to deal with in the future, as well as the current efforts being made to overcome such threats. This report aims to build a frame of reference for how the Internet is currently structured and how the different layers interact to make it possible for the Internet to exist as we know it. Additionally, the report explores the challenges the current Internet architecture design is facing, the reasons why these challenges are arising, and the multiple efforts taking place to keep the Internet working. In order to reach these objectives, I visited multiple sites of organizations whose only reason for existence is to support the Internet and keep it functioning. The approach used to write this report was to research the topic by accessing multiple technical papers extracted from the IEEE database and network conference reviews and to analyze and expose their findings. This report utilizes this information to elaborate on how network engineers are handling the challenges of keeping the Internet functional while supporting dynamic requirements. This report exposes the challenges the Internet is facing with scalability, the existence of debugging tools, security, mobility, reliability, and quality of service. It explains briefly how each of these challenges is affecting the Internet and the strategies in place to vanquish them. The final objectives are to inform the reader of how the Internet is working with a set of ever-changing and growing requirements, give an overview of the multiple institutions dedicated to reinforcing the Internet, and provide a list of current challenges and the actions being taken to overcome them.
129

Video content analysis for automated detection and tracking of humans in CCTV surveillance applications

Tawiah, Thomas Andzi-Quainoo January 2010 (has links)
The problems of achieving a high detection rate with a low false alarm rate for human detection and tracking in video sequences, performance scalability, and improving response time are addressed in this thesis. The underlying causes are the effects of scene complexity, human-to-human interactions, scale changes, and scene background-human interactions. A two-stage processing solution, namely human detection followed by human tracking, with two novel pattern classifiers is presented. Scale-independent human detection is achieved by processing in the wavelet domain using square wavelet features. These features, used to characterise human silhouettes at different scales, are similar to the rectangular features used in [Viola 2001]. At the detection stage two detectors are combined to improve the detection rate. The first detector is based on the shape outline of humans extracted from the scene using a reduced-complexity outline extraction algorithm; a shape mismatch measure is used to differentiate between the human and the background class. The second detector uses rectangular features as primitives for silhouette description in the wavelet domain. The marginal distribution of features collocated at a particular position on a candidate human (a patch of the image) is used to describe the silhouette statistically. Two similarity measures are computed between a candidate human and the model histograms of the human and non-human classes, and these are used to discriminate between the two classes. At the tracking stage, a tracker based on the joint probabilistic data association filter (JPDAF) for data association and motion correspondence is presented. Track clustering is used to reduce the complexity of hypothesis enumeration. Towards improving response time with increases in frame dimension, scene complexity, and number of channels, a scalable algorithmic architecture and an operating-accuracy prediction technique are presented. A scheduling strategy for improving the response time and throughput by parallel processing is also presented.
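For orientation, the sketch below computes simple rectangular (Haar-like) features with an integral image in the pixel domain, in the spirit of [Viola 2001]; the thesis's square wavelet features are computed in the wavelet domain instead, so this is an analogy rather than the method itself.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle whose top-left corner is (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature, evaluated in constant time."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```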
130

Scalability and performance management of internet applications in the cloud

Dawoud, Wesam January 2013 (has links)
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in its workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference due to consolidation in the cloud environment complicates the performance management of Internet applications. In this dissertation, we propose two approaches to mitigate the impact of the resource-provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with the workload. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side one that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution, finding the scaling thresholds that achieve the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system which finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution which does not require provider involvement. To evaluate our approaches and the designed algorithms at a large scale, we developed a simulator called ScaleSim. In the simulator, we implemented scalability components acting as the scalability components of Amazon EC2, and the current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from the real environment, and the workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource-provisioning overhead with only a 9% increase in cost.
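To make the threshold-and-forecast idea concrete, here is a small Python sketch of a proactive horizontal-scaling decision: a linear trend over recent utilization samples is extrapolated a few intervals ahead and compared against scale-out/scale-in thresholds. The thresholds and the trend forecast are illustrative stand-ins; they are not the optimized thresholds or the forecasting algorithm proposed in the dissertation.

```python
def plan_capacity(recent_utilization, current_vms, scale_out_at=0.7,
                  scale_in_at=0.3, forecast_horizon=3):
    """Proactive threshold-based horizontal scaling decision.

    recent_utilization : list of recent average CPU utilizations (0.0-1.0)
    Returns the VM count to run in the next interval.
    """
    if len(recent_utilization) < 2:
        return current_vms
    # Least-squares slope of the recent samples.
    n = len(recent_utilization)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(recent_utilization) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, recent_utilization))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    forecast = recent_utilization[-1] + slope * forecast_horizon

    if forecast > scale_out_at:
        return current_vms + 1           # add a VM before saturation is reached
    if forecast < scale_in_at and current_vms > 1:
        return current_vms - 1           # release a VM during low load
    return current_vms
```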
