1

Implementation and capabilities of layered feed-forward networks

Richards, Gareth D. January 1990 (has links)
No description available.
2

Aggregation, dissemination and filtering: controlling complex information flows in networks

Banerjee, Siddhartha 25 October 2013 (has links)
Modern-day networks, both physical and virtual, are designed to support increasingly sophisticated applications based on complex manipulation of information flows. On the flip side, the ever-growing scale of the underlying networks necessitates the use of low-complexity algorithms. Exploring this tension requires an understanding of the relation between these flows and the network structure. In this thesis, we undertake a study of three such processes: aggregation, dissemination and filtering. In each case, we characterize how the network topology imposes limits on these processes, and how one can use knowledge of the topology to design simple yet efficient control algorithms.

Aggregation: We study data aggregation in sensor networks via in-network computation, i.e., via combining packets at intermediate nodes. In particular, we are interested in maximizing the refresh rate of repeated/streaming aggregation. For a particular class of functions, we characterize the maximum achievable refresh rate in terms of the underlying graph structure; furthermore, we develop optimal algorithms for general networks, and also a simple distributed algorithm for acyclic wired networks.

Dissemination: We consider dissemination processes on networks via intrinsic peer-to-peer transmissions aided by external agents: sources with bounded spreading power, but unconstrained by the network. Such a model captures many static (e.g., long-range links) and dynamic/controlled (e.g., mobile nodes, broadcasting) models for long-range dissemination. We explore the effect of external sources for two dissemination models: spreading processes, wherein nodes once infected remain so forever, and epidemic processes, in which nodes can recover from the infection. Our results demonstrate (i) the role of graph structure and (ii) the power of random strategies: in spreading processes, external agents dramatically reduce the spreading time in spatially constrained networks, and random policies are order-wise optimal; in epidemic processes, causing a long-lasting epidemic requires external sources that scale with the number of nodes, although the strategies can again be random.

Filtering: A common phenomenon in modern recommendation systems is the use of user feedback to infer the 'value' of an item to other users, resulting in an exploration-vs-exploitation trade-off. We study this in a simple natural model, where an 'access graph' constrains which user is allowed to see which item, and the number of items and the number of item-views are of the same order. We want algorithms that recommend relevant content in an online manner (i.e., instantaneously on user arrival). To this end, we consider both finite-population (i.e., with a fixed set of users and items) and infinite-horizon (i.e., with user/item arrivals and departures) settings; in each case, we design algorithms with guarantees on the competitive ratio for any arbitrary user. Conversely, we also present upper bounds on the competitive ratio, which show that in many settings our algorithms are order-wise optimal.
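The dissertation's formal models are not reproduced in this listing; as a toy illustration of the dissemination result above, the following sketch simulates an SI-style spreading process on a ring (a spatially constrained graph) aided by a random external agent. The function name, the ring topology, and the per-round external rate are illustrative assumptions, not the thesis's actual model.

```python
import random

def spread_time(n, external_per_round, seed=0):
    # SI spreading on a ring of n nodes: every infected node infects its two
    # neighbours each round; an external agent additionally infects
    # `external_per_round` uniformly random nodes (a "random policy").
    rng = random.Random(seed)
    infected = {0}
    rounds = 0
    while len(infected) < n:
        newly = set()
        for v in infected:
            newly.add((v - 1) % n)
            newly.add((v + 1) % n)
        for _ in range(external_per_round):
            newly.add(rng.randrange(n))
        infected |= newly
        rounds += 1
    return rounds

# Without external agents the ring needs ~n/2 rounds; a handful of random
# external infections per round cuts this sharply, consistent with the
# abstract's claim about spatially constrained networks.
print(spread_time(1000, 0), spread_time(1000, 5))
```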
3

Nonlinear identification using local model networks

McLoone, Seamus Cornelius January 2000 (has links)
No description available.
4

Network Data Streaming: Algorithms for Network Measurement and Monitoring

Kumar, Abhishek 18 November 2005 (has links)
With the emergence of computer networks as one of the primary modes of communication, and with their adoption for an increasingly wide range of applications, there is a growing need to understand and characterize the traffic they carry. The rise of large-scale network attacks adds urgency to this need. However, the large size, high speed, and increasing complexity of these networks imply that tracking and characterizing their traffic is an increasingly difficult problem. Dealing with higher-level aggregates, such as flows instead of packets, does not solve the problem, because these aggregates tend to be quite numerous and exhibit dynamics of their own. In this thesis, we investigate a novel approach to dealing with the immense amounts of data associated with network measurement and monitoring. Building upon the paradigm of Data Streaming, which processes a large stream of data using a small working memory to answer a class of queries, we develop an architecture for Network Data Streaming that accommodates the additional constraints imposed by network monitoring. Using this architecture, we design algorithms for monitoring properties of network traffic that have traditionally been considered too difficult to track on high-speed links and routers. Our first algorithm accurately estimates the size of individual flows. A second algorithm estimates the distribution of flow sizes, enabling network operators to monitor anomalies in the traffic; incorporating packet sampling, we extend it to estimate the flow size distribution of arbitrary subpopulations. Finally, we apply the tools of Network Data Streaming to the operation of packet sampling itself: using the ability to efficiently estimate flow statistics, such as approximate per-flow size, we design a family of mechanisms in which the sampling decision is guided by this knowledge. The individual solutions developed in this thesis share a common architectural theme, supporting the monitoring of highly dynamic populations. Integrating this with the traditional sampling-based framework for network monitoring will enable a broad range of applications for accurate and comprehensive monitoring of network traffic.
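The abstract does not spell out the thesis's data structures, so the sketch below shows only the general data-streaming paradigm it builds on: a count-min-style sketch that answers per-flow size queries from a small, fixed working memory in one pass over the packet stream. This is a standard streaming structure offered as an assumed illustration, not the algorithm developed in the thesis.

```python
import random

class CountMinSketch:
    # depth x width counter array; each row hashes a flow ID with its own salt.
    def __init__(self, width=1024, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.counts = [[0] * width for _ in range(depth)]
        self.salts = [rng.getrandbits(32) for _ in range(depth)]

    def _col(self, row, flow_id):
        return hash((self.salts[row], flow_id)) % self.width

    def update(self, flow_id, nbytes=1):
        # one pass over the stream: O(depth) work per packet
        for r in range(self.depth):
            self.counts[r][self._col(r, flow_id)] += nbytes

    def estimate(self, flow_id):
        # hash collisions only inflate counters, so the minimum over
        # rows is an upper bound on the true per-flow count
        return min(self.counts[r][self._col(r, flow_id)]
                   for r in range(self.depth))

sketch = CountMinSketch()
for pkt_flow in ["a", "b", "a", "a", "c", "b"]:
    sketch.update(pkt_flow)
print(sketch.estimate("a"))  # 3 (possibly more under collisions)
```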
5

Grouper: A Packet Classification Algorithm Allowing Time-Space Tradeoffs

Kuhn, Joshua Adam 01 January 2011 (has links)
This thesis presents an algorithm for classifying packets according to arbitrary (including noncontiguous) bitmask rules. As its principal novelty, the algorithm is parameterized by the amount of memory available and customizes its data structures to optimize classification time without exceeding the given memory bound; it thus automatically trades time for space as needed. The two extremes of this time-space tradeoff (linear search through the rules versus a single table that maps every possible packet to its class number) are special cases of the general algorithm we present. Additional features of the algorithm include its simplicity, its open-source prototype implementation, its good performance even on worst-case rule sets, and its extensibility to range rules and dynamic rule-set updates. The contributions of this thesis first appeared in [1].
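The abstract names the two extremes of the time-space tradeoff explicitly, so here is a minimal sketch of just those extremes, for toy 8-bit "packets" and first-match bitmask rules. The rule encoding and names are assumptions for illustration; Grouper's general memory-parameterized scheme between the extremes is not reproduced here.

```python
# First-match bitmask rules: packet p matches (mask, value) iff p & mask == value.
RULES = [
    (0b11110000, 0b10100000, 1),  # noncontiguous bits are allowed in masks
    (0b00001111, 0b00000101, 2),
]

def classify_linear(pkt):
    # One extreme: linear search. O(#rules) time, O(#rules) space.
    for mask, value, cls in RULES:
        if pkt & mask == value:
            return cls
    return 0  # default class

# Other extreme: precompute every possible packet's class.
# O(1) lookup time, O(2^bits) space -- here 2^8 entries.
TABLE = [classify_linear(p) for p in range(256)]

def classify_table(pkt):
    return TABLE[pkt]

assert classify_table(0b10100101) == classify_linear(0b10100101) == 1
```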
6

Multi-operator greedy routing based on open routers

Venmani, Daniel Philip 26 February 2014 (has links) (PDF)
Revolutionary mobile technologies, such as high-speed packet access 3G (HSPA+) and LTE, have significantly increased mobile data rates over the radio link. While most of the world sees this revolution as a blessing to day-to-day life, a less-known fact is that these improvements on the radio access link demand correspondingly large improvements in bandwidth on the backhaul network. Today's Internet Service Providers (ISPs) and Mobile Network Operators (MNOs) are heavily impacted by this surge in smartphone usage: the operational costs (OPEX) associated with traditional backhaul methods are rising faster than the revenue generated by the new data services. Building a mobile backhaul network is very different from building a commercial data network. A mobile backhaul network requires (i) QoS-based traffic handling with strict requirements on delay and jitter and (ii) high availability/reliability. While most ISPs and MNOs promote redundancy and resilience as guarantees of high availability, the specter of failure still hangs over today's networks. The underlying observation is that ISPs and MNOs remain exposed to rapid traffic fluctuations and/or unpredicted breakdowns; even the largest operators are affected. But what if these operators could put in place designs and mechanisms that improve network survivability and avoid such occurrences? What if mobile network operators could devise low-cost backhaul solutions while still ensuring the required availability and reliability?

With this problem statement in hand, the overarching theme of this dissertation falls within the following scopes: (i) to provide low-cost backhaul solutions, the motivation being to build networks without over-provisioning and to bring in new resources (link capacity/bandwidth) on occasions of unexpected traffic surges as well as under network failure conditions, particularly to ensure premium services; and (ii) to provide uninterrupted communication even during network failures, but without redundancy. A slightly greater emphasis is laid on tackling 'last-mile' link failures. The scope of this dissertation is therefore to propose, design and model novel network architectures that improve network survivability and capacity while eliminating network-wide redundancy, within the context of mobile backhaul networks. Motivated by this, we study the problem of how to share the available resources of a backhaul network among competing operators with whom a Service Level Agreement (SLA) has been concluded, and we present a systematic study of our proposed solutions, covering a variety of empirical resource-sharing heuristics and optimization frameworks. Building on this, our work extends to a novel fault-restoration framework that cost-effectively provides protection and restoration for the operators, equipping them with a parameterized objective function to choose desired paths based on the traffic patterns of their end customers. We then illustrate the survivability of backhaul networks with a reduced amount of physical redundancy, by effectively managing geographically distributed backhaul network equipment belonging to different MNOs using 'logically centralized' but physically distributed controllers, while meeting strict constraints on network availability and reliability.
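The dissertation's optimization frameworks are not given in this abstract; as one hypothetical example of the kind of empirical resource-sharing heuristic it mentions, the sketch below splits a backhaul link's capacity among operators in proportion to an SLA weight, capping each operator at its demand (water-filling style). All names and the heuristic itself are illustrative assumptions, not the dissertation's algorithms.

```python
def share_capacity(capacity, demands, weights):
    # Weight-proportional water-filling: repeatedly split the remaining
    # capacity in proportion to SLA weight, capping operators at their demand.
    alloc = {op: 0.0 for op in demands}
    active = {op for op in demands if demands[op] > 0}
    remaining = float(capacity)
    while active and remaining > 1e-9:
        total_w = sum(weights[op] for op in active)
        share = {op: remaining * weights[op] / total_w for op in active}
        capped = {op for op in active if alloc[op] + share[op] >= demands[op]}
        if not capped:
            # nobody hits their demand: hand out the proportional shares
            for op in active:
                alloc[op] += share[op]
            break
        # satisfy capped operators fully, then redistribute what is left
        for op in capped:
            remaining -= demands[op] - alloc[op]
            alloc[op] = demands[op]
        active -= capped
    return alloc

# Operator A holds the stronger SLA, so it receives the larger share of the
# 100-unit link; neither operator is allocated more than it asked for.
print(share_capacity(100, {"A": 70, "B": 50}, {"A": 2, "B": 1}))
```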
7

Multi-operator greedy routing based on open routers / Routeurs ouverts avec routage glouton dans un contexte multi-opérateurs

Venmani, Daniel Philip 26 February 2014 (has links)
Major mobile technology evolutions, such as 3G, HSPA+ and LTE networks, have significantly increased the data capacity carried over the radio link. While the benefits of these evolutions are obvious in everyday use, a less-known fact is that these improvements, which mainly concern the radio access, also require technological advances in the backhaul network to support this increase in bandwidth. Internet Service Providers (ISPs) and mobile network operators face a real challenge in keeping pace with smartphone usage: the operational costs associated with traditional backhaul methods are rising faster than the revenue generated by the new data services. This is particularly true when the backhaul network must itself be built over radio links. Such a mobile backhaul network requires (i) quality-of-service (QoS) management of its traffic, with strict requirements on delay and jitter, and (ii) high availability/reliability. While most ISPs and mobile network operators cite the advantages of redundancy and resilience mechanisms for guaranteeing high availability, today's networks are clearly still exposed to outages. Although the causes of these outages are clear, rapid fluctuations and/or unexpected traffic failures continue to affect even the largest operators. But could these operators not put in place models and mechanisms that improve network survivability and avoid such situations? Can mobile network operators jointly deploy low-cost solutions that would ensure network availability and reliability?

Given this observation, this thesis aims to (i) provide low-cost backhaul solutions, the objective being to build wireless networks by adding new resources on demand rather than by over-provisioning, in response to unexpected traffic surges or network failures, so as to guarantee a higher quality of service for premium traffic; and (ii) provide uninterrupted communication, even in the event of network failure, but without redundancy. Particular attention is paid to this problem on the so-called 'last-mile' link. This thesis designs a new mobile backhaul network architecture and proposes models for efficiently improving the survivability and capacity of these networks, without relying on costly passive-redundancy mechanisms. With these motivations, we study the problem of sharing the resources of a backhaul network among competing operators with whom a Service Level Agreement (SLA) has been concluded, and we present a systematic study of the proposed solutions, covering a variety of empirical sharing heuristics and resource-optimization frameworks. In this context, we continue with a study of a failure-recovery mechanism that efficiently provides low-cost protection and restoration of resources, allowing operators, via a function based on constraint programming, to choose and establish new paths according to the traffic patterns of their end customers. We illustrate the survivability of backhaul networks with a low degree of hardware redundancy through the efficient management of geographically distributed backhaul network equipment belonging to different operators, relying on logically centralized but physically distributed controllers, while respecting strict constraints on network availability and reliability.
