About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Optical flow routing : a routing and switching paradigm for the core optical networks

He, Jenny Jing January 2002 (has links)
No description available.
2

Implications of traffic characteristics on interdomain traffic engineering

Uhlig, Steve 02 March 2004 (has links)
This thesis discusses the implications of traffic characteristics for interdomain traffic engineering with BGP. We first provide an overview of the interdomain traffic control problem. Then, we present results concerning the characteristics of interdomain traffic, based on the analysis of real traffic traces gathered from non-transit ASes, and discuss the implications of the topological properties of the traffic for interdomain traffic engineering. Building on this knowledge of the traffic characteristics, we study the complexity of designing interdomain traffic engineering techniques by casting the problem as an optimization problem. We show that designing such techniques is possible, but that several issues inherent in the current interdomain architecture make the task complex. Finally, we evaluate the current state of the art of interdomain traffic engineering and discuss how we envision its future.
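As a rough illustration of what casting interdomain traffic engineering as an optimization problem can mean, the hypothetical sketch below maps outbound traffic per destination prefix to one of a stub AS's egress providers so as to minimize the maximum link utilization. This is not the thesis's formulation; the capacities, prefixes, and volumes are invented.

```python
# Toy illustration (not the thesis's exact formulation): interdomain traffic
# engineering as an optimization problem. A stub AS maps outbound traffic per
# destination prefix to one of its egress (provider) links so that the maximum
# link utilization is minimized. All names and numbers are hypothetical.
from itertools import product

egress_capacity = {"provider_A": 100.0, "provider_B": 60.0}          # Mb/s
prefix_traffic = {"p1": 40.0, "p2": 35.0, "p3": 25.0, "p4": 20.0}    # Mb/s

def max_utilization(assignment):
    """Maximum utilization over egress links for a prefix -> egress mapping."""
    load = {e: 0.0 for e in egress_capacity}
    for prefix, egress in assignment.items():
        load[egress] += prefix_traffic[prefix]
    return max(load[e] / egress_capacity[e] for e in egress_capacity)

# Exhaustive search is fine for a toy instance; real interdomain instances are
# far larger and only controllable through coarse BGP knobs, which is part of
# the complexity discussed in the thesis.
best = None
for choice in product(egress_capacity, repeat=len(prefix_traffic)):
    assignment = dict(zip(prefix_traffic, choice))
    u = max_utilization(assignment)
    if best is None or u < best[0]:
        best = (u, assignment)

print("best max utilization:", round(best[0], 3))
print("prefix -> egress:", best[1])
```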
3

High-speed optical packet switching over arbitrary physical topologies using the Manhattan Street Network

Komolafe, Olufemi O. January 2001 (has links)
No description available.
4

Classification of encrypted cloud computing service traffic using data mining techniques

Qian, Cheng 27 February 2012 (has links)
Traffic classification is needed not only by wireless network providers but, increasingly, in the Cloud Computing environment. A data center hosting Cloud Computing services needs to apply priority policies and Service Level Agreement (SLA) rules at the edge of its network. Growing requirements for user privacy protection and the trend toward IPv6 adoption will contribute to significant growth in encrypted Cloud Computing traffic. This report presents experiments that apply data-mining-based Internet traffic classification methods to encrypted Cloud Computing service traffic. By combining TCP session-level attributes, client and host connection patterns, and Cloud Computing service Message Exchange Patterns (MEP), the best method identified in this report yields 89% overall accuracy.
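A minimal sketch of the general approach described above, supervised data mining over flow-level attributes, is shown below. It assumes scikit-learn and purely synthetic data; the feature names and class labels are illustrative and are not taken from the report.

```python
# Minimal sketch (not the report's exact method): classifying encrypted traffic
# from TCP session-level attributes with a supervised data mining model.
# Feature names and the synthetic data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000

# Each row: [mean packet size, session duration (s), packets per second,
#            client->server / server->client byte ratio]
X = np.column_stack([
    rng.normal(800, 200, n),
    rng.exponential(5.0, n),
    rng.exponential(50.0, n),
    rng.uniform(0.1, 10.0, n),
])
y = rng.integers(0, 3, n)  # three hypothetical cloud service classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# With random labels the accuracy stays near chance; the report's 89% figure
# comes from real session attributes, connection patterns and MEP features.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The report evaluates several data mining methods and reports the best one, so this particular classifier is only a stand-in for that family of techniques.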
5

On the interactions of overlay routing

Lee, Gene Moo 24 August 2015 (has links)
Overlay routing has been successful as an incremental method to improve current Internet routing by allowing users to select Internet paths themselves. By its nature, overlay routing is selfish, and this selfishness affects the related components of Internet routing. In this thesis, we study three interactions related to overlay routing. First, overlay routing changes the traffic patterns observed by network operators, who use traffic engineering techniques to cope with dynamic traffic demands; we improve this vertical interaction between overlay routing and traffic engineering. Second, the performance of overlay routing may be affected by the actions of other coexisting overlays; an initial result on this horizontal interaction among multiple overlays is given. Lastly, within a single overlay network, overlay nodes can be regarded as independent decision makers who act strategically to maximize individual gain; we design an incentive-based framework to achieve Pareto optimality in this internal interaction of overlay routing.
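The selfish behavior referred to above can be made concrete with a small sketch: each overlay source compares the direct path with one-hop detours through other overlay nodes and picks whichever currently looks fastest, regardless of the load this places on the underlying network. The latency values below are hypothetical.

```python
# Minimal sketch (hypothetical latencies): the selfish path selection at the
# heart of overlay routing. A source picks either the direct Internet path or
# a one-hop detour through another overlay node, whichever has lower measured
# latency, ignoring the load it imposes on the underlay.
latency = {  # pairwise one-way latencies in ms between overlay nodes
    ("A", "B"): 80, ("B", "A"): 80,
    ("A", "C"): 30, ("C", "A"): 30,
    ("B", "C"): 35, ("C", "B"): 35,
}
nodes = {"A", "B", "C"}

def best_overlay_path(src, dst):
    """Return (path, latency) chosen greedily by the source node."""
    options = {(src, dst): latency[(src, dst)]}
    for relay in nodes - {src, dst}:
        options[(src, relay, dst)] = latency[(src, relay)] + latency[(relay, dst)]
    path = min(options, key=options.get)
    return path, options[path]

print(best_overlay_path("A", "B"))  # detour via C (65 ms) beats direct (80 ms)
```

It is precisely this local, selfish choice that reshapes the traffic matrix seen by traffic engineering and couples coexisting overlays to one another.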
6

Internet traffic modeling and forecasting using non-linear time series model GARCH

Anand, Chaoba Nikkie January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Caterina M. Scoglio / Forecasting of network traffic plays a very important role in many domains such as congestion control, adaptive applications, network management and traffic engineering. Characterizing and modeling the traffic are necessary for efficient functioning of the network. A good traffic model should be able to capture prominent traffic characteristics such as long-range dependence (LRD), self-similarity, and heavy-tailed distributions. Because of this persistent dependence, modeling LRD time series is a challenging task. In this thesis, we propose a non-linear time series model, Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) of order p and q, with the innovation process generalized to the class of heavy-tailed distributions. The GARCH model, an extension of the AutoRegressive Conditional Heteroskedasticity (ARCH) model, has been widely used in financial data analysis. Our model is fitted to real data from the Abilene Network, a high-performance Internet2 backbone network connecting research institutions with 10 Gbps links. The analysis is done on 24 hours of data from three different links, aggregated every 5 minutes. The orders are selected based on the minimum modified Akaike Information Criterion (AICC) using the Introduction to Time Series Modeling (ITSM) tool; for our model the best minimum order was found to be (1,1). The goodness of fit is evaluated using the Q-Q (t-distributed) plot and the ACF plot of the residuals, and the results confirm that the model fits well. The forecast analysis is done using a simple one-step prediction. The first 24 hours of the data set are used as the training part to model the traffic; the next 24 hours are used for the forecast and the comparison. The actual and predicted traffic data are compared to evaluate the performance of the model, and performance metrics are computed from the prediction error. A comparative study of the GARCH model against other existing models confirms the simplicity and the better performance of our model, where complexity is measured by the number of parameters to be estimated. From this study, the GARCH model is found to be able to forecast aggregated traffic, but further investigation needs to be conducted on less aggregated traffic. Based on the forecast model developed from the GARCH model, we also intend to develop a dynamic bandwidth allocation algorithm as future work.
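For readers who want to experiment with the same class of model, the sketch below fits a GARCH(1,1) with Student-t (heavy-tailed) innovations and produces a simple one-step forecast. It uses the Python `arch` package and a synthetic series as stand-ins; the thesis itself used the ITSM tool on Abilene link measurements.

```python
# Illustrative sketch only: fitting a GARCH(1,1) model with heavy-tailed
# (Student-t) innovations and producing a one-step-ahead variance forecast.
# The `arch` package and the synthetic series below are assumptions, standing
# in for the ITSM tool and the Abilene traces used in the thesis.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
# Stand-in for a de-meaned traffic-rate series: 24 h at 5-minute aggregation.
series = rng.standard_t(df=4, size=288)

model = arch_model(series, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())

forecast = result.forecast(horizon=1)            # simple one-step prediction
print("next-step variance:", float(forecast.variance.iloc[-1, 0]))
```

The Student-t innovation distribution is the heavy-tailed generalization the abstract mentions; setting `dist="normal"` recovers the standard GARCH.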
7

Contributions to modelling of internet traffic by fractal renewal processes.

Arfeen, Muhammad Asad January 2014 (has links)
The principle of parsimonious modelling of Internet traffic states that a minimal number of descriptors should be used for its characterization. Until the early 1990s, the conventional Markovian models for voice traffic had been considered suitable and parsimonious for data traffic as well. Later, with the discovery of strong correlations and increased burstiness in Internet traffic, various self-similar count models were proposed. But such models are strictly mono-fractal and applicable only at coarse time scales, whereas Internet traffic modelling requires modelling traffic at fine and coarse time scales; modelling traffic that can be mono- or multi-fractal; modelling traffic at the interarrival-time and count levels; modelling traffic at the access and core tiers; and modelling all three structural components of Internet traffic, that is, packets, flows and sessions. The philosophy of this thesis can be described as “the renewal of renewal theory in Internet traffic modelling”. Renewal theory has great potential for modelling the statistical characteristics of Internet traffic belonging to individual users, access networks and core networks. In this thesis, we develop an Internet traffic modelling framework based on fractal renewal processes, that is, renewal processes whose interarrival-time distribution is heavy-tailed. The proposed renewal framework covers packets, flows and sessions as structural components of Internet traffic and is applicable to modelling traffic at fine and coarse time scales. The properties of superposed renewal processes can be used to model traffic in higher tiers of the Internet hierarchy. Since the framework is based on renewal processes, Internet traffic can be modelled at both the interarrival-time and count levels.
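As a minimal, self-contained example of the building block the framework rests on, the sketch below (a toy under stated assumptions, not the thesis's calibrated model) generates a fractal renewal process by drawing Pareto-distributed interarrival times and then forms its count process at a coarse time scale.

```python
# Minimal sketch, not the thesis's framework: a single fractal renewal process,
# i.e. a renewal process whose interarrival times are heavy-tailed (Pareto).
# The tail index and scale below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(2)

def fractal_renewal_arrivals(n_events, alpha=1.5, x_min=0.01):
    """Arrival times of a renewal process with Pareto(alpha) interarrivals."""
    # numpy's pareto() draws from a Lomax; shifting by 1 and scaling by x_min
    # gives the classical Pareto with tail index alpha.
    interarrivals = x_min * (rng.pareto(alpha, n_events) + 1.0)
    return np.cumsum(interarrivals)

arrivals = fractal_renewal_arrivals(100_000)

# Count process at a coarse time scale: events per 1-second bin.
bins = np.arange(0.0, arrivals[-1], 1.0)
counts, _ = np.histogram(arrivals, bins=bins)
print("mean/variance of counts per bin:", counts.mean(), counts.var())
```

Superposing many such independent processes is the natural way to move up the Internet hierarchy from individual users toward the access and core tiers, as the abstract suggests.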
8

Fractal Network Traffic Analysis with Applications

Liu, Jian 19 May 2006 (has links)
Today, the Internet is growing exponentially, with traffic statistics that mathematically exhibit fractal characteristics: self-similarity and long-range dependence. With these properties, data traffic shows high peak-to-average bandwidth ratios and makes networks inefficient. These problems make it difficult to predict, quantify, and control data traffic. In this thesis, two analytical methods are used to study fractal network traffic: second-order self-similarity analysis and multifractal analysis. First, self-similarity is an adaptive property of network traffic, and many factors are involved in creating this characteristic. A new view of this self-similar traffic structure, related to multi-layer network protocols, is provided; this view improves on the theory used in most current literature. Second, the scaling region for traffic self-similarity is divided into two timescale regimes: short-range dependence (SRD) and long-range dependence (LRD). Experimental results show that the network transmission delay separates the two scaling regions, which gives a physical source for the periodicity in the observed traffic. Bandwidth, TCP window size, and packet size affect SRD, while the statistical heavy-tailedness (Pareto shape parameter) affects the structure of LRD. In addition, a formula to estimate traffic burstiness is derived from the self-similarity property. Furthermore, multifractal analysis yields the following results. At large timescales, increasing bandwidth does not improve throughput; the two factors affecting traffic throughput are network delay and TCP window size. On the other hand, more simultaneous connections smooth traffic, which could improve network efficiency. At small timescales, improving network efficiency requires controlling bandwidth, TCP window size, and network delay to reduce traffic burstiness. In general, network traffic processes have a Hölder exponent α ranging between 0.7 and 1.3, and their statistics differ from those of Poisson processes. From the traffic analysis, a notion of the efficient bandwidth, EB, is derived: above that bandwidth, traffic appears bursty and its burstiness cannot be reduced by multiplexing, while below it, traffic is congested. An important finding is that the relationship between the bandwidth and the transfer delay is nonlinear.
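One standard tool behind the second-order self-similarity analysis mentioned above is the aggregated-variance estimate of the Hurst parameter. The hedged sketch below implements it, with a Poisson series as a sanity check; the thesis does not necessarily use this exact estimator.

```python
# Hedged sketch: estimating the Hurst parameter of a traffic count series with
# the aggregated-variance method, one standard second-order self-similarity
# check (not necessarily the estimator used in the thesis).
import numpy as np

def hurst_aggregated_variance(counts, block_sizes=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Fit log Var(X^(m)) vs log m; the slope is 2H - 2 for a self-similar series."""
    counts = np.asarray(counts, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(counts) // m
        if n_blocks < 2:
            break
        aggregated = counts[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(aggregated.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0

# For i.i.d. (Poisson-like) counts the estimate should sit near H = 0.5;
# long-range-dependent traffic traces typically give 0.5 < H < 1.
rng = np.random.default_rng(3)
print("H for Poisson counts:", round(hurst_aggregated_variance(rng.poisson(50, 65536)), 2))
```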
9

An orchestration approach for unwanted internet traffic identification

FEITOSA, Eduardo Luzeiro 31 January 2010 (has links)
Universidade Federal do Amazonas / A brief look at today's Internet traffic shows a mixture of known and unknown services, new and old applications, legitimate and illegitimate traffic, solicited and unsolicited data, and traffic that is highly relevant or simply unwanted. Among these, unwanted Internet traffic has become increasingly harmful to the performance and availability of services, making network resources scarce. Typically, this kind of traffic takes the form of spam, phishing, denial-of-service attacks (DoS and DDoS), viruses and worms, and misconfigured resources and services, among other sources. Despite various efforts, both isolated and coordinated, unwanted Internet traffic continues to grow. First, because it spans a vast range of user applications, data and information with different objectives. Second, because current solutions are ineffective at identifying and reducing this kind of traffic. Finally, because a clear definition of what constitutes unwanted traffic is still needed. To address these problems, and motivated by the level unwanted traffic has reached, this thesis presents: 1. A study of the universe of unwanted Internet traffic, with definitions, a discussion of context and classification, and a survey of existing and potential solutions. 2. A methodology for identifying unwanted traffic based on orchestration. OADS (Orchestration Anomaly Detection System) is a single platform for the identification of unwanted traffic that allows cooperative and integrated management of methods, tools and solutions aimed at identifying unwanted traffic. 3. The design and implementation of modular solutions that integrate with the proposed methodology. The first is a Web information retrieval support system (WIRSS), called OADS Miner or simply ARAPONGA, whose function is to gather security information about vulnerabilities, attacks, intrusions and traffic anomalies available on the Web, index it efficiently, and provide a search engine focused on this kind of information. The second, called Alert Pre-Processor, is a scheme that uses a clustering technique to receive alerts from multiple sources, aggregate them, and extract the most relevant ones, enabling correlations and possibly the recognition of the strategies used in attacks. The third and last is an alert correlation and fusion mechanism, FER Analyzer, which uses the frequent episode discovery (FED) technique to find sequences of alerts that confirm attacks and possibly predict future events. To evaluate the proposal and its implementations, a series of experiments were conducted to demonstrate the effectiveness and accuracy of the solutions.
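To make the Alert Pre-Processor idea concrete, the simplified sketch below groups alerts from multiple sensors that share a source address and signature within a time window and ranks the groups by size. The real component uses a clustering technique; all field names, thresholds and alerts here are invented for illustration.

```python
# Illustrative sketch, not the OADS implementation: aggregating alerts from
# multiple sources by (source address, signature, time window) and surfacing
# the largest groups as the most relevant. All values are hypothetical.
from collections import defaultdict

WINDOW = 60  # seconds

alerts = [
    {"t": 10, "src": "10.0.0.5", "sig": "portscan"},
    {"t": 15, "src": "10.0.0.5", "sig": "portscan"},
    {"t": 20, "src": "10.0.0.5", "sig": "portscan"},
    {"t": 25, "src": "192.0.2.7", "sig": "ssh-bruteforce"},
    {"t": 400, "src": "10.0.0.5", "sig": "portscan"},
]

groups = defaultdict(list)
for alert in sorted(alerts, key=lambda a: a["t"]):
    key = (alert["src"], alert["sig"], alert["t"] // WINDOW)  # bucket by window
    groups[key].append(alert)

# Rank aggregated alerts by size: bigger clusters are reported first.
for (src, sig, window), members in sorted(groups.items(), key=lambda kv: -len(kv[1])):
    print(f"{sig} from {src} in window {window}: {len(members)} alerts")
```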
10

Nouveaux paradigmes de contrôle de congestion dans un réseau d'opérateur / New paradigms for congestion control in an operator's network

Sanhaji, Ali 29 November 2016 (has links)
Network congestion is a phenomenon that can affect the quality of service experienced by users. The continuous increase of Internet traffic makes congestion an issue the network operator must address to satisfy its clients. The operator's historical answers to congestion, such as overdimensioning the links of its infrastructure, are no longer viable. With the evolution of network architectures and the emergence of new Internet applications, new paradigms for congestion control have to be considered to meet the expectations of the users of the operator's network. In this thesis, we examine the new approaches proposed for congestion control in an operator's network. We evaluate these approaches through simulations, which allows us to estimate their effectiveness and their potential to be deployed and operated in the Internet context, and to identify the challenges that must be met to reach that goal. We also propose congestion control solutions for new environments such as Software-Defined Networking architectures and clouds deployed over one or several data centers, where congestion must be monitored to maintain the quality of the cloud services offered to customers. To support our proposed congestion control architectures, we present experimental platforms that demonstrate the operation and potential of our solutions.
