241

Implementation of Wideband Multicarrier and Embedded GSM

Tsou, Thomas 26 October 2012
The Global System for Mobile Communications (GSM) cellular standard, having been in existence for over two decades, is the most widely deployed wireless technology in the world. While third-generation networks and beyond, such as the Universal Mobile Telecommunications System (UMTS) and Long Term Evolution (LTE), are undergoing extraordinary growth and driving a large share of current cellular development, technologies and deployments based on GSM remain dominant on a global scale and, like more recent standards, continue to evolve rapidly. The software-defined radio (SDR) base station is one technology driving rapid change in cellular infrastructure. While commercial vendors have now embraced SDR, another movement has recently gained prominence: the convergence of open-source software and hardware with cellular implementation. OpenBTS, a deployable implementation of the GSM radio air interface, and the Universal Software Radio Peripheral (USRP), an RF hardware platform, are two primary examples of such open-source software and hardware products. OpenBTS and the USRP underlie the three GSM features implemented and presented in this thesis. This thesis describes the extension of the OpenBTS software-defined radio transceiver in three critical areas: user capacity, transmit signal integrity, and embedded small-form-factor operation. First, an optimized wideband multicarrier implementation is presented that substantially increases capacity beyond that of a single-carrier system. Second, the GSM modulator is examined in depth and extended to provide performance that exceeds standards compliance by a significant margin. Third, operation of the GSM transceiver on an E100 embedded platform with ARM and fixed-point DSP processors is explored, optimized, and tested. / Master of Science
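The modulator this thesis extends is GSM's GMSK modulator: bits are mapped to NRZ impulses, shaped by a Gaussian pulse with BT = 0.3, and the shaped sequence is integrated into the phase of a constant-envelope signal. A minimal sketch of that chain, as an illustration only (the pulse truncation and normalization choices here are assumptions, not the OpenBTS implementation):

```python
import numpy as np

def gmsk_modulate(bits, sps=4, bt=0.3):
    """Illustrative GMSK modulator (GSM uses BT = 0.3).

    bits: sequence of 0/1; sps: samples per symbol.
    Returns a unit-envelope complex baseband signal.
    """
    nrz = 2.0 * np.asarray(bits, dtype=float) - 1.0
    # Gaussian pulse, truncated to 3 symbol periods (an assumed truncation)
    t = np.arange(-1.5, 1.5, 1.0 / sps)
    sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()                      # normalize pulse area to 1
    # Upsample the NRZ stream to impulses and shape with the Gaussian pulse
    imp = np.zeros(len(nrz) * sps)
    imp[::sps] = nrz
    freq = np.convolve(imp, g)
    # Integrate frequency into phase; modulation index h = 0.5 as in MSK
    phase = np.pi * 0.5 * np.cumsum(freq) / sps
    return np.exp(1j * phase)

sig = gmsk_modulate([1, 0, 1, 1, 0, 0, 1, 0])
```

Because the output is exp(j·phase), its envelope is exactly constant, which is the property that lets GSM use efficient nonlinear power amplifiers.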
242

Evaluation of GNU Radio Platform Enhanced for Hardware Accelerated Radio Design

Karve, Mrudula Prabhakar 05 January 2011
The advent of software radio technology has enabled radio developers to design and implement radios with great ease and flexibility. Software radios are effective for experimentation and development of radio designs, but they have limitations when it comes to high-speed, high-throughput designs. This limitation can be overcome by introducing a hardware element into the software radio platform. The Enhancing GNU Radio for Hardware Accelerated Radio Design project implements such a scheme by augmenting a conventional GNU Radio flow with an FPGA co-processor. In this thesis, this novel platform is evaluated in terms of the performance of a radio design as well as hardware and software system requirements. A simple and efficient Zigbee receiver design is presented, and its implementation is used as a proof of concept for the effectiveness and design methodology of the modified GNU Radio. This work also proposes a scheme to extend this idea to the design of ultra-wideband radio systems based on multiband OFDM. / Master of Science
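The receiver mentioned above targets IEEE 802.15.4, whose O-QPSK PHY spreads each 4-bit symbol into a 32-chip sequence; a receiver recovers the symbol by correlating the received chips against all sixteen sequences and picking the best match. The sketch below shows only that despreading step, with a random stand-in chip table (the real 802.15.4 sequences are not reproduced here, and this is not the thesis's receiver design):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the sixteen 32-chip spreading sequences of
# IEEE 802.15.4 (the standard's actual chip table is not reproduced here)
CHIP_TABLE = rng.integers(0, 2, size=(16, 32))

def despread(chips):
    """Return the 4-bit symbol whose spreading sequence agrees with the
    received chip block in the most positions (hard-decision correlation)."""
    agreements = (CHIP_TABLE == np.asarray(chips)).sum(axis=1)
    return int(np.argmax(agreements))

# Emulate transmitting symbol 9 with three chip errors from the channel
tx = CHIP_TABLE[9].copy()
tx[[1, 7, 12]] ^= 1
sym = despread(tx)
```

Hard-decision correlation like this tolerates several chip errors per symbol, which is what gives the direct-sequence spreading scheme its robustness.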
243

Enhancing security and scalability of Virtual Private LAN Services

Liyanage, M. (Madhusanka) 21 November 2016
Abstract Ethernet-based VPLS (Virtual Private LAN Service) is a transparent, protocol-independent, multipoint L2VPN (Layer 2 Virtual Private Network) mechanism for interconnecting remote customer sites over IP (Internet Protocol)- or MPLS (Multiprotocol Label Switching)-based provider networks. VPLS networks are becoming attractive in many enterprise applications, such as data center interconnect (DCI), voice over IP (VoIP) and videoconferencing services, due to their simple, protocol-independent and cost-efficient operation. However, these new VPLS applications demand additional requirements, such as elevated security, enhanced scalability, optimal utilization of network resources and further reductions in operational costs. Hence, the motivation of this thesis is to develop secure and scalable VPLS architectures for future communication networks. First, a scalable secure flat-VPLS architecture is proposed based on the Host Identity Protocol (HIP). It contains a session-key-based security mechanism and an efficient broadcast mechanism that together increase the forwarding- and security-plane scalability of VPLS networks. Second, a secure hierarchical-VPLS architecture is proposed to achieve control-plane scalability. A novel encrypted-label-based secure frame forwarding mechanism is designed to transport L2 frames over a hierarchical VPLS network. Third, a novel Distributed Spanning Tree Protocol (DSTP) is designed to maintain a loop-free Ethernet network over a VPLS network. DSTP runs a modified STP (Spanning Tree Protocol) instance in each remote segment of the VPLS network. In addition, two Redundancy Identification Mechanisms (RIMs), termed Customer Associated RIM (CARIM) and Provider Associated RIM (PARIM), are used to mitigate the impact of invisible loops in the provider network.
Lastly, a novel SDN (Software Defined Networking)-based VPLS architecture (Soft-VPLS) is designed to overcome tunnel-management limitations in legacy secure VPLS architectures. Moreover, three new mechanisms are proposed to improve the performance of legacy tunnel-management functions: 1) a dynamic tunnel establishment mechanism, 2) a tunnel resumption mechanism and 3) a fast transmission mechanism. The proposed architecture utilizes a centralized controller to command VPLS tunnel establishment based on real-time network behavior. The results of the thesis will thus support the design and development of more secure, scalable and efficient VPLS networks, and will also help to optimize the utilization of network resources and further reduce the operational costs of future VPLS networks.
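The encrypted-label forwarding idea, in which provider-core nodes switch on labels they cannot interpret while only the session-key holders can recover the real segment identity, can be illustrated with a deliberately toy construction (a hash-derived keystream XOR; the thesis's actual cryptographic design is not reproduced here, and the segment-label format is invented):

```python
import hashlib
import os

def xor_label(label: bytes, session_key: bytes, nonce: bytes) -> bytes:
    """Toy 'encrypted label': XOR with a keystream derived from the session
    key, so the provider core forwards on an opaque value. Illustration
    only; supports labels up to 32 bytes with this single hash block."""
    stream = hashlib.sha256(session_key + nonce).digest()
    return bytes(a ^ b for a, b in zip(label, stream))

key, nonce = os.urandom(32), os.urandom(16)
label = b"SEG-0007"                      # hypothetical segment identifier
wire = xor_label(label, key, nonce)      # what the provider core sees
recovered = xor_label(wire, key, nonce)  # XOR with the same stream inverts it
```

A real scheme would also authenticate the label and manage key distribution among the provider-edge nodes, which is precisely what the HIP-based session mechanism above addresses.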
244

Multi-operator greedy routing based on open routers / Routeurs ouverts avec routage glouton dans un contexte multi-opérateurs

Venmani, Daniel Philip 26 February 2014
Revolutionary mobile technologies, such as high-speed packet access 3G (HSPA+) and LTE, have significantly increased mobile data rates over the radio link. While most of the world sees this revolution as a blessing to day-to-day life, a little-known fact is that these improvements over the radio access link demand tremendous improvements in bandwidth on the backhaul network. Today's Internet Service Providers (ISPs) and Mobile Network Operators (MNOs) are heavily impacted by this excessive smartphone usage: the operational costs (OPEX) associated with traditional backhaul methods are rising faster than the revenue generated by the new data services. Building a mobile backhaul network is very different from building a commercial data network. A mobile backhaul network requires (i) QoS-based traffic handling with strict requirements on delay and jitter and (ii) high availability/reliability. While most ISPs and MNOs have promised the advantages of redundancy and resilience to guarantee high availability, the specter of failure still looms over today's networks: ISPs and MNOs remain exposed to rapid fluctuations and/or unpredicted breakdowns in traffic, and even the largest operators can be affected. But what if these operators could put in place designs and mechanisms to improve network survivability and avoid such occurrences?
What if mobile network operators could devise low-cost backhaul solutions while ensuring the required availability and reliability in their networks? With this problem statement in hand, the overarching theme of this dissertation falls within the following scopes: (i) to provide low-cost backhaul solutions, the motivation being to build networks without over-provisioning and to bring in new resources (link capacity/bandwidth) on occasions of unexpected traffic surges as well as under network failure conditions, particularly to ensure premium services; and (ii) to provide uninterrupted communications even at times of network failure, but without redundancy. A slightly greater emphasis is laid on tackling 'last-mile' link failures. The scope of this dissertation is therefore to propose, design and model novel network architectures that improve network survivability and network capacity while eliminating network-wide redundancy, within the context of mobile backhaul networks. Motivated by this, we study the problem of how to share the available resources of a backhaul network among competing operators with whom a Service Level Agreement (SLA) has been concluded. We present a systematic study of our proposed solutions, focusing on a variety of empirical resource-sharing heuristics and optimization frameworks. With this background, our work extends to a novel fault restoration framework that can cost-effectively provide protection and restoration for the operators, enabling them, via a parameterized objective function, to choose desired paths based on the traffic patterns of their end customers.
We then illustrate the survivability of backhaul networks with a reduced amount of physical redundancy, by effectively managing geographically distributed backhaul network equipment belonging to different MNOs using logically centralized but physically distributed controllers, while meeting strict constraints on network availability and reliability.
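As a minimal illustration of the resource-sharing problem studied here (a toy heuristic of my own, not one of the dissertation's actual heuristics or optimization frameworks), one simple policy grants each operator its SLA minimum and divides the remaining capacity in proportion to unmet demand:

```python
def share_capacity(capacity, demands, minima):
    """Toy SLA-aware sharing: grant each operator min(demand, SLA floor),
    then split the leftover capacity in proportion to residual demand.
    All quantities are in the same bandwidth units."""
    grant = {op: min(demands[op], minima[op]) for op in demands}
    left = capacity - sum(grant.values())
    residual = {op: demands[op] - grant[op] for op in demands}
    total = sum(residual.values())
    if total > 0:
        for op in grant:
            grant[op] += left * residual[op] / total
    return grant

# Two hypothetical operators sharing a 100-unit backhaul link
alloc = share_capacity(100.0, {"A": 80.0, "B": 40.0}, {"A": 20.0, "B": 10.0})
```

The allocation sums to the link capacity and never violates an operator's SLA floor; a real scheme must additionally cap grants at demand and handle the case where the SLA minima alone oversubscribe the link.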
245

Timing delay characterization of GNU Radio based 802.15.4 network using LimeSDR

Hazra, Saptarshi January 2018
Massive deployment of diverse ultra-low-power wireless devices necessitates the rapid development of communication protocols. Software Defined Radio (SDR) provides a flexible platform for deploying and evaluating the real-world performance of these protocols, but SDR-based communication systems suffer from high and unpredictable delays. There is a lack of comprehensive understanding of the delays experienced by these systems on new SDR platforms such as the LimeSDR; this knowledge gap needs to be filled in order to reduce these delays and to better design protocols that can take advantage of these platforms. We design a GNU Radio based IEEE 802.15.4 experimental setup in which the data path is time-stamped at various points of interest to build a comprehensive picture of the delay characteristics. Our analysis shows that GNU Radio processing and LimeSDR buffering delay are the major delays in these data paths. We attempt to decrease the LimeSDR buffering delay by decreasing the USB transfer size, but this comes at the cost of increased processing overhead. The USB transfer packet size is therefore varied to investigate which size provides the best balance between buffering delay and processing overhead across two different host computers. Our experiments show that for the best-measured configuration, the mean and jitter of latency decrease by 37% and 40%, respectively, on the host computer with higher processing resources, while throughput is unaffected by these modifications. Higher processing resources help in handling the increased processing overhead and can better reduce the buffering delay.
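At each pair of timestamping points in such a setup, the characterization reduces to per-packet latency differences and their mean and jitter; a minimal sketch with hypothetical timestamps (jitter taken here as the standard deviation of latency, one common convention):

```python
import statistics

def delay_stats(t_in, t_out):
    """Per-packet latency between two timestamping points on the data
    path: returns (mean latency, jitter), with jitter taken as the
    population standard deviation of the latencies."""
    lat = [b - a for a, b in zip(t_in, t_out)]
    return statistics.mean(lat), statistics.pstdev(lat)

# Hypothetical timestamps (seconds) at GNU Radio ingress and USB egress
t_ingress = [0.000, 0.010, 0.020, 0.030]
t_egress = [0.004, 0.015, 0.023, 0.035]
mean_lat, jitter = delay_stats(t_ingress, t_egress)
```

Repeating this computation for every adjacent pair of instrumentation points is what localizes the dominant contributors, here GNU Radio processing and LimeSDR buffering.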
246

Distributed Relay/Replay Attacks on GNSS Signals

Lenhart, Malte January 2022
In modern society, Global Navigation Satellite Systems (GNSSs) are ubiquitously relied upon by many systems, among others in critical infrastructure, for navigation and time synchronization. To overcome the prevailing vulnerable state of civilian GNSSs, many detection schemes for different attack types (i.e., jamming and spoofing) have been proposed in the literature over the last decades. With the launch of Galileo Open Service Navigation Message Authentication (OSNMA), certain, but not all, types of GNSS spoofing are prevented. We therefore analyze the remaining attack surface of relay/replay attacks in order to identify a suitable and effective combination of detection schemes against them. One shortcoming in the evaluation of countermeasures is the lack of available test platforms, which commonly limits evaluation to mathematical description, simulation and/or tests against a well-defined set of recorded spoofing incidents. In order to allow researchers to test countermeasures against more diverse threats, this degree project investigates relay/replay attacks against GNSS signals in real-world setups. For this, we consider colluding adversaries relaying/replaying on signal and message level in real time, over consumer-grade Internet, and with commercial off-the-shelf (COTS) hardware. We thereby highlight how effective and simple relay/replay attacks can be against existing and, likely, upcoming authenticated signals. We investigate the requirements for such colluding attacks and present their limitations and impact, as well as highlight possible detection points.
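One illustrative detection point of the kind highlighted above, stated here as an assumption rather than the project's actual method, is that a relay/replay chain necessarily adds transport and processing delay, so the victim's GNSS-derived time lags a trusted local clock by more than normal propagation allows:

```python
def replay_suspect(local_time, gnss_time, threshold=20e-3):
    """Flag a possible relay/replay when the GNSS time solution lags a
    trusted local clock by more than `threshold` seconds. The 20 ms
    figure is a made-up illustration, not a calibrated bound."""
    return (local_time - gnss_time) > threshold

# 5 ms of lag: plausible receiver latency, not flagged
assert not replay_suspect(100.000, 99.995)
```

A real check would budget for receiver processing and geographic distance, and would be combined with the other detection schemes the thesis surveys, since a single clock comparison is easy to defeat in isolation.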
247

Especificación e implementación de un sistema de red definida por software con funciones virtuales adaptadas a despliegues de Internet de las cosas

Suárez de Puga García, Jara 21 March 2022
Nowadays, the complexity of traditional network administration, together with its lack of scalability and flexibility, has been an obstacle to the proper development and integration of new emerging technologies that make use of the network, such as the so-called Internet of Things (IoT). The principal IoT network requirement enabling the growth of this paradigm is the need to facilitate the exchange and administration of high volumes of data from very heterogeneous sources. The IoT is defined as the digital interconnection of everyday objects endowed with intelligence (smart devices) through a data communication network, either public (the Internet) or private. However, this technological trend does not only depend on the smart devices, but on the whole infrastructure of platforms, frameworks, services and applications that helps data travel from the source devices to their different destinations. The handling of the massive volumes of data extracted from those smart devices, and their storage, processing and analysis, known as Big Data, is also a key part of this paradigm. These data are gathered from very different sources and hence have diverse structures and formats; moreover, they are exchanged using various network protocols (LoRa, CoAP, etc.), which hinders their management and communication through conventional networks that were not designed for such traffic. Given this problem, several technological approaches have emerged. Virtual software-defined networking is presented as a possible solution to provide flexibility, scalability and simplicity of management to the networks that interconnect these devices, platforms, services and other IoT elements. Virtualizing the network infrastructure adds an extra layer of abstraction, providing a holistic vision of the network, centralizing the administration of its elements and enabling the development of network services specific to IoT deployments. This project is presented as a meeting point of these two technological paradigms; its main objective is the design of an architectural blueprint and testbed in which the control tools of software-defined networks (SDN) and virtualized network functions (NFV) can be tested as applied to IoT deployments, so that their advantages and implications can be evaluated and new lines of development discovered on this basis. / Suárez De Puga García, J. (2022). Especificación e implementación de un sistema de red definida por software con funciones virtuales adaptadas a despliegues de Internet de las cosas [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181555
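The centralized-control idea described above, a controller that holds a global view and installs match-action rules into the network elements, can be sketched as a toy model (an invented in-memory API, not OpenFlow nor the system specified in this thesis):

```python
class ToyController:
    """Minimal sketch of centralized SDN control: the controller keeps a
    per-switch table of (match, action) rules and switches consult it.
    Hypothetical API for illustration only."""

    def __init__(self):
        self.switches = {}

    def install(self, switch, match, action):
        """Install a match-action rule on the named switch."""
        self.switches.setdefault(switch, []).append((match, action))

    def forward(self, switch, packet):
        """Return the action of the first rule whose fields all match."""
        for match, action in self.switches.get(switch, []):
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # default when no rule matches

ctl = ToyController()
# Steer hypothetical CoAP sensor traffic out port 2 of switch s1
ctl.install("s1", {"proto": "CoAP"}, "out:2")
decision = ctl.forward("s1", {"proto": "CoAP", "src": "sensor-7"})
```

The point of the sketch is the separation of concerns: protocol-specific handling for heterogeneous IoT traffic lives in centrally managed rules rather than in per-device configuration.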
248

Modelling, estimation and compensation of imbalances in quadrature transceivers

De Witt, Josias Jacobus 03 1900
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The use of the quadrature mixing topology has been severely limited in the past due to its sensitivity to mismatches between its signal paths. In recent years, researchers have suggested that digital techniques can be used to compensate for the impairments of the analogue quadrature mixing front-end. Most authors, however, focus on the modelling and compensation of frequency-independent imbalances, reasoning that this approach is sufficient for narrowband signal operation. This common assumption is, however, becoming increasingly less applicable as the use of wider-bandwidth signals and multi-channel systems becomes more prevalent. In this dissertation, baseband-equivalent distortion models are derived which model frequency-independent as well as frequency-dependent contributions to the imbalances of the front-end. Both lowpass and bandpass imbalances are modelled, which extends the current modelling approaches found in the literature. The resulting baseband models are shown to be capable of explaining the imbalance characteristics observed in practical quadrature mixing front-ends, where existing models fail to do so. The developed imbalance models are then used to develop novel frequency-dependent imbalance extraction and compensation techniques, which directly extract the exact quadrature imbalances of the front-end using simple test tones. The imbalance extraction and compensation procedures are implemented in the digital baseband domain of the transceiver and do not require high computational complexity. The performance of these techniques is subsequently verified through simulations and a practical hardware implementation, yielding a significant improvement in the image-rejection capabilities of the quadrature mixing transceiver. Finally, a novel blind imbalance compensation technique is developed.
This technique is aimed at extracting frequency-independent I/Q imbalances in systems employing digital modulation schemes. No test tones are employed and the imbalances of the modulator and demodulator are extracted from the second order statistics of the received signal. Simulations are presented to investigate the performance of these techniques under various operating conditions. / AFRIKAANSE OPSOMMING: Die gebruik van die haaksfasige mengtopologie word geweldig beperk deur die sensitiwiteit vir wanbalanse wat mag bestaan tussen die twee analoog seinpaaie. In die afgelope paar jaar het navorsers digitale metodes begin voorstel om te kompenseer vir hierdie wanbalanse in die analooggebied. Meeste navorsers fokus egter op frekwensie-onafhanklike wanbalanse. Hulle staaf hierdie aanslag deur te redineer dat dit ’n aanvaarbare aaname is vir ’n nouband stelsel. Hierdie algemene aanvaarding is egter besig om minder akkuraat te raak, namate wyeband- en multikanaalstelses aan die orde van die dag raak. In hierdie tesis word basisband-ekwiwalente wanbelansmodelle afgelei wat poog om die effek van frekwensie-afhanklike en -onafhanklike wanbalanse akkuraat voor te stel. Beide laagdeurlaat- en banddeurlaatwanbalanse word gemodelleer, wat ‘n uitbreiding is op die huididge modellerings benaderings wat in literatuur gevind word. Dit word aangetoon dat die modelle van hierdie tesis daarin slaag om die karakteristieke van ’n werklike haaksfasige mengstelsel akkuraat te vervat – iets waarin huidige modelle in die literatuur nie slaag nie. Die basisband-ekwiwalente modelle word dan gebruik om nuwe digitale kompensasie metodes te ontwikkel, wat daarin slaag om die frekwensie-afhanklike wanbalanse van die haaksfasige mengstelsel af te skat, en daarvoor te kompenseer in die digitale deel van die stelsel. Hierdie kompensasiemetodes gebruik eenvoudige toetsseine om die wanbalanse af te skat. 
Die werksverrigting van hiedie kompensasiemetodes word dan ondersoek deur middel van simulasies en ’n praktiese hardeware-implementasie. Die resultate wys daarop dat hierdie metodes daarin slaag om ’n aansienlike verbetering in die beeldonderdrukkingsvermo¨ens van die haaksfasige mengers te weeg te bring. Laastens word daar ook ’n blinde kompensasiemetode ontwikkel, wat gemik is op frekwensie- onafhanklike wanbalanse in digital-modulasie-skama stelsels. Vir hierdie metodes is geen toetsseine nodig om die wanbalanse af te skat nie, en word dit gedoen vanuit die tweede-orde statistiek van die ontvangde sein. Die werksverrigting van hierdie tegnieke word verder bevestig deur middel van simulasies.
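The frequency-independent imbalance model at the heart of this record can be sketched in a few lines: a gain/phase mismatch between the I and Q paths maps an ideal complex baseband signal s into z = αs + βs*, and once α and β are known (e.g. extracted from a simple test tone, as in the tone-based techniques described above), inverting that 2×2 mixing restores the image rejection. The sketch below is illustrative only — the numeric mismatch values are assumptions, not figures from the thesis.

```python
import numpy as np

# Frequency-independent quadrature imbalance: gain mismatch g and phase
# mismatch phi between the I and Q paths (illustrative values only).
g, phi = 1.05, np.deg2rad(3.0)
alpha = 0.5 * (1 + g * np.exp(-1j * phi))   # desired-signal coefficient
beta = 0.5 * (1 - g * np.exp(1j * phi))     # image coefficient

# Complex test tone on an exact FFT bin, then the imbalanced observation
# z = alpha*s + beta*conj(s); the conj(s) term is the unwanted image.
N, k = 4096, 409
n = np.arange(N)
s = np.exp(2j * np.pi * k * n / N)
z = alpha * s + beta * np.conj(s)

def irr_db(x):
    """Image rejection ratio: power at the tone bin over its mirror bin."""
    spec = np.fft.fft(x)
    return 10 * np.log10(np.abs(spec[k]) ** 2 / (np.abs(spec[-k]) ** 2 + 1e-30))

# Compensation: invert the 2x2 mixing [z; z*] = [[a, b], [b*, a*]] [s; s*],
# assuming alpha and beta have already been extracted from the test tone.
det = np.abs(alpha) ** 2 - np.abs(beta) ** 2
s_hat = (np.conj(alpha) * z - beta * np.conj(z)) / det

print(f"IRR before compensation: {irr_db(z):.1f} dB")
print(f"IRR after compensation:  {irr_db(s_hat):.1f} dB")
```

With these example mismatch values the uncompensated image rejection is on the order of 30 dB; after the inversion it is limited only by floating-point noise, which is the kind of improvement a frequency-independent compensator can deliver when the imbalance really is flat across the band.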
249

Impact of using cloud-based SDN controllers on the network performance

Henriksson, Johannes, Magnusson, Alexander January 2019 (has links)
Software-Defined Networking (SDN) is a network architecture that differs from traditional network planes. SDN has three layers: infrastructure, controller, and application. The goal of SDN is to simplify the management of larger networks by centralizing control in the controller layer instead of leaving it in the infrastructure. Given the known advantages of SDN networks and the flexibility of cloud computing, we are interested in whether this combination of SDN and cloud services affects network performance, and what effect the cloud provider's physical location has on network performance. These points become important as SDN gains popularity in enterprise networks, where centralizing branch networks into one cloud-based SDN controller seems like a logical next step. These questions were formulated through a literature study and answered with an experimentation method. The experiments consist of two network topologies: a locally hosted SDN controller (baseline) and a cloud-hosted SDN controller. The topology used Zodiac FX switches and Linux hosts. The following metrics were measured: throughput, latency, jitter, packet loss, and time to add new hosts. The conclusion is that SDN as a cloud service is possible and does not significantly affect network performance. One limitation of this thesis was the hardware, which resulted in large fluctuations in throughput and packet loss.
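The traffic metrics measured in this thesis — throughput, latency, jitter, and packet loss — can all be derived from per-packet send/receive records. The helper below is a minimal sketch of that bookkeeping; the `Packet` record and its field names are invented for illustration and are not part of the thesis's tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    seq: int               # sender-assigned sequence number
    size: int              # payload size in bytes
    sent: float            # send timestamp (s)
    recv: Optional[float]  # receive timestamp (s); None if the packet was lost

def link_metrics(packets, n_sent):
    """Throughput (bit/s), mean latency (s), jitter (s), and loss ratio."""
    got = sorted((p for p in packets if p.recv is not None), key=lambda p: p.seq)
    transit = [p.recv - p.sent for p in got]
    duration = max(p.recv for p in got) - min(p.sent for p in got)
    throughput = 8 * sum(p.size for p in got) / duration
    latency = sum(transit) / len(transit)
    # Jitter as the mean absolute difference of consecutive transit times,
    # in the spirit of RFC 3550's interarrival jitter.
    jitter = sum(abs(b - a) for a, b in zip(transit, transit[1:])) / max(len(transit) - 1, 1)
    loss = 1 - len(got) / n_sent
    return throughput, latency, jitter, loss

# Three received packets and one lost out of four sent (made-up numbers).
trace = [
    Packet(0, 1200, 0.000, 0.010),
    Packet(1, 1200, 0.100, 0.112),
    Packet(2, 1200, 0.200, None),
    Packet(3, 1200, 0.300, 0.308),
]
bps, lat, jit, loss = link_metrics(trace, n_sent=4)
print(f"{bps:.0f} bit/s, latency {lat*1000:.1f} ms, jitter {jit*1000:.1f} ms, loss {loss:.0%}")
# → 93506 bit/s, latency 10.0 ms, jitter 3.0 ms, loss 25%
```

The "time to add new hosts" metric is a control-plane measurement and would be timed separately, from the moment a host attaches to the moment its first flow is forwarded.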
250

Soft Migration from Traditional to Software Defined Networks

Liver, Toma, Darian, Mohammed January 2019 (has links)
The concept of Software Defined Networking (SDN) may be a way to meet the demands and requirements of fast-growing computer network infrastructure. The concept is attracting the interest of enterprises looking to expand their network infrastructures, but one has to consider the impact of migrating from an existing network infrastructure to an SDN network. One way to minimize that impact is to carry out a soft migration from a traditional IP network to SDN, creating what is called a heterogeneous network. Instead of fully replacing the network infrastructure and facing the consequences of doing so, the idea of soft migration is to replace only a part of it with an SDN environment and examine the resulting performance. This thesis work analyzes the performance of a network consisting of a traditional IP network combined with SDN. It is essential in this work to identify the differences in performance between a heterogeneous network and a dedicated traditional IP network. Therefore, the questions addressed in this thesis work are how such a heterogeneous network can be designed, and how it performs in terms of throughput, jitter, and packet loss. Through experimentation and the study of related work on SDN fundamentals, we hope to achieve the goals of this thesis work and to give ourselves and the reader a clearer insight.
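The control-plane split behind such a heterogeneous network can be caricatured in a few lines of code: SDN switches forward by flow-table lookup and punt table misses to a centralized controller, while legacy routers keep their own static next-hop tables, with both coexisting in one topology. All class and node names below are invented for illustration — this is a toy model of the concept, not the thesis's testbed.

```python
from collections import deque

class Controller:
    """Centralized control plane: computes first hops over the known topology."""
    def __init__(self, links):
        self.links = links                      # node -> list of neighbours
    def route(self, src, dst):
        prev, q = {src: None}, deque([src])     # BFS shortest path
        while q:
            u = q.popleft()
            for v in self.links[u]:
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        hop = dst
        while prev[hop] != src:                 # walk back to the first hop
            hop = prev[hop]
        return hop

class Switch:
    """SDN switch: forwards by flow table, asks the controller on a miss."""
    def __init__(self, name, controller):
        self.name, self.controller, self.flows = name, controller, {}
    def forward(self, dst):
        if dst not in self.flows:               # table miss -> "packet-in"
            self.flows[dst] = self.controller.route(self.name, dst)
        return self.flows[dst]

class LegacyRouter:
    """Traditional router: static next-hop table, no controller involvement."""
    def __init__(self, name, next_hops):
        self.name, self.next_hops = name, next_hops
    def forward(self, dst):
        return self.next_hops[dst]

links = {"s1": ["r1", "s2"], "s2": ["s1", "h2"], "r1": ["s1"], "h2": ["s2"]}
ctrl = Controller(links)
s1 = Switch("s1", ctrl)
r1 = LegacyRouter("r1", {"h2": "s1"})

# The legacy router hands traffic for h2 to the SDN island, whose controller
# resolves the next hop and installs a flow entry on the first miss.
path = [r1.forward("h2"), s1.forward("h2")]
print(path)  # → ['s1', 's2']
```

The measurement question the thesis asks then amounts to comparing end-to-end throughput, jitter, and loss when a path crosses both forwarding styles against a path that stays inside the traditional IP network.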
