
Mobility as first class functionality : ILNPv6 in the Linux kernel

Phoomikiattisak, Ditchaphong January 2016 (has links)
Mobility is an increasingly important aspect of communication for the Internet. The use of handheld computing devices such as tablets and smartphones is increasingly popular among Internet users. However, the current Internet protocol, IP, was not originally designed to support mobility over the Internet, so mobile users currently suffer from connection disruption when they move around. Once a device changes its point of attachment between different wireless technologies (a vertical handoff), e.g. from WiFi to 3G, its IP address changes and bound sessions (e.g. TCP sessions) break. While the IETF Mobile IPv4 (MIPv4) and Mobile IPv6 (MIPv6) solutions have been defined for some time, and implementations are available, they have seen little deployment due to their complexity and performance overheads. This thesis has examined how IP mobility can be supported as first class functionality, i.e. how mobility can be enabled through the end hosts only, without changing the current network infrastructure. Current approaches such as MIPv6 require the use of proxies and tunnels, which introduce protocol overhead and impact transport layer performance. The Identifier-Locator Network Protocol (ILNP) is an alternative approach which potentially works end-to-end, but this had yet to be tested. This thesis shows that ILNP provides mobility support as first class functionality, can be implemented in an operating system kernel, and is accessible from the standard API without requiring changes to applications. Mobility management is controlled and managed by the end-systems and does not require additional network-layer entities; only the end hosts need to be upgraded for ILNP to operate. This work demonstrates an instance of ILNP that is a superset of IPv6, called ILNPv6, implemented by extending the current IPv6 code in the Linux kernel. 
A direct performance comparison of ILNPv6 and MIPv6 is presented, showing the improved control and performance of ILNPv6, in terms of flow continuity, packet loss, handoff delay, and signalling overhead.
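ILNPv6's first-class mobility rests on the identifier-locator split: the 128 bits of an IPv6 address are reinterpreted as a 64-bit Locator (topological, changed on handoff) and a 64-bit Node Identifier (stable, what transport sessions bind to). A minimal sketch of that split — the helper names and example addresses here are invented for illustration:

```python
import ipaddress

def split_ilnpv6(addr: str) -> tuple[int, int]:
    """Split a 128-bit IPv6 address into ILNPv6's 64-bit Locator
    (high bits, topological) and 64-bit Node Identifier (low bits,
    stable across handoffs)."""
    value = int(ipaddress.IPv6Address(addr))
    return value >> 64, value & ((1 << 64) - 1)

def rebind_locator(addr: str, new_prefix: str) -> str:
    """Model a vertical handoff: keep the Identifier, swap the
    Locator for the prefix of the newly attached network."""
    _, identifier = split_ilnpv6(addr)
    new_locator, _ = split_ilnpv6(new_prefix)
    return str(ipaddress.IPv6Address((new_locator << 64) | identifier))

# A transport session bound to the Identifier survives the move:
home = "2001:db8:aaaa:1::abcd"
moved = rebind_locator(home, "2001:db8:bbbb:2::")   # e.g. WiFi to 3G
assert split_ilnpv6(home)[1] == split_ilnpv6(moved)[1]
```

Because only the Locator changes on handoff, upper layers see a stable endpoint name, which is what lets mobility live entirely in the end hosts.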

Automated Measurement and Change Detection of an Application’s Network Activity for Quality Assistance / Automatisk mätning och förändringsdetektering av en applikations nätverksaktivitet för kvalitetsstöd

Nissa Holmgren, Robert January 2014 (has links)
Network usage is an important quality metric for mobile apps. Slow networks, low monthly traffic quotas, and high roaming fees restrict mobile users' amount of usable Internet traffic. Companies wanting their apps to stay competitive must be aware of their network usage and of changes to it. Short feedback loops for the impact of code changes are key in agile software development. To notify stakeholders of changes when they happen, without being prohibitively expensive in terms of manpower, the change detection must be fully automated. To further decrease the manpower cost of implementing network usage change detection, the system needs to have low configuration requirements and must keep the false positive rate low while still detecting larger changes. This thesis proposes an automated change detection method for network activity that quickly notifies stakeholders, with relevant information, so that a root cause analysis can begin once a change in network activity is introduced. With measurements of Spotify's iOS app, we show that the tool achieves a low rate of false positives while detecting relevant changes in network activity, even for apps with network usage patterns as dynamic as Spotify's.
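The listing does not spell out the detector itself; one minimal way to keep the false-positive rate low, sketched here with invented names and thresholds, is to flag a build only when its measured traffic deviates from a baseline of recent builds both statistically and by a practically relevant margin:

```python
from statistics import mean, stdev

def detect_change(baseline: list[float], candidate: float,
                  min_sigma: float = 3.0, min_rel: float = 0.10) -> bool:
    """Flag a change only if the candidate build's traffic deviates
    from the baseline both statistically (z-score) and practically
    (relative difference), which suppresses false positives from
    normal run-to-run noise."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(candidate - mu) / sigma if sigma > 0 else float("inf")
    rel = abs(candidate - mu) / mu if mu > 0 else float("inf")
    return z >= min_sigma and rel >= min_rel

builds = [10.1, 9.8, 10.3, 10.0, 9.9]   # MB per test run (baseline)
assert not detect_change(builds, 10.4)   # within noise: no alert
assert detect_change(builds, 13.0)       # +30%: notify stakeholders
```

Requiring both conditions suppresses alerts from run-to-run noise while still catching larger changes, matching the trade-off described above.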

Protocol and System Design for a Service-centric Network Architecture

Huang, Xin 01 February 2010 (has links)
Next-generation Internet will be governed by the need for flexibility. Heterogeneous end-systems, novel applications, and security and manageability challenges require networks to provide a broad range of services that go beyond store-and-forward. Following this trend, a service-centric network architecture is proposed for the next-generation Internet. It utilizes router-based programmability to provide packet processing services inside the network and to decompose communications into these service blocks. By providing different compositions of services along the data path, such a network can customize its connections to satisfy various communication requirements. This design extends the flexibility of the Internet to meet its next-generation challenges. This work addresses three major challenges in implementing such service-centric networks. Finding the optimal path for a given composition of services is the first challenge. This is called "service routing", since both service availability and routing cost need to be considered. Novel algorithms and a matching protocol are designed to solve the service routing problem in large-scale networks. A prototype based on Emulab is implemented to demonstrate and evaluate our design. Finding the optimal composition of services to satisfy the communication requirements of a given connection is the second challenge. This is called "service composition". A novel decision-making framework is proposed, which reduces the service composition problem to a planning problem and automates the composition of services according to specified communication requirements. A further investigation shows that extending this decision-making framework to combine the service routing and service composition problems yields a better solution than solving them separately. Run-time resource management on the data plane is the third challenge. Several run-time task mapping approaches have been proposed for Network Processor systems. An evaluation methodology based on queuing networks is designed to systematically evaluate and compare these solutions under various network traffic scenarios. The results of this work give qualitative and quantitative insights into next-generation Internet design that combines issues from computer networking, architecture, and system design.
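The "service routing" problem described above — the cheapest path that also visits nodes offering the required services, in order — can be sketched with the standard layered-graph reduction. This is a generic illustration, not the thesis's algorithm, and all names are invented:

```python
import heapq

def service_route(graph, services_at, src, dst, chain):
    """Cheapest path from src to dst that passes through nodes
    offering each service in `chain`, in order.  Layered-graph trick:
    the search state is (node, number of services already applied);
    applying service i is only allowed at nodes that offer it."""
    k = len(chain)
    dist = {(src, 0): 0}
    pq = [(0, src, 0)]
    while pq:
        d, node, done = heapq.heappop(pq)
        if d > dist.get((node, done), float("inf")):
            continue                       # stale queue entry
        if node == dst and done == k:
            return d                       # all services applied
        # apply the next required service at this node (zero extra cost)
        if done < k and chain[done] in services_at.get(node, ()):
            if d < dist.get((node, done + 1), float("inf")):
                dist[(node, done + 1)] = d
                heapq.heappush(pq, (d, node, done + 1))
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get((nxt, done), float("inf")):
                dist[(nxt, done)] = nd
                heapq.heappush(pq, (nd, nxt, done))
    return None                            # no path satisfies the chain

graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
services_at = {"B": {"firewall"}}
# Direct A->C costs 5 but skips the firewall; via B it costs 2.
assert service_route(graph, services_at, "A", "C", ["firewall"]) == 2
```

The state (node, services-done) turns the constrained problem back into plain shortest path, at the cost of a factor of k + 1 in graph size.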

Towards a USB control area network

Golchin, Ahmad 01 February 2024 (has links)
Cyber-physical systems are computers equipped with sensors and actuators that enable them to interact with their surrounding environments. Ground vehicles, drones, and manufacturing robots are examples of such systems that require timing guarantees in addition to functional correctness to achieve their mission objectives. These systems often use multiple microcontroller boards for workload distribution and physical redundancy. The emergence of PC-class embedded systems featuring high processing capabilities and abundant resources presents an opportunity to consolidate separate microcontroller boards as software-defined functions into fewer computer systems. For instance, current automotive systems utilize upwards of 100 electronic control units (ECUs) for chassis, body, power-train, infotainment, and vehicle control services. Consolidation saves manufacturing costs, reduces wiring, simplifies packaging in space-limited situations, and streamlines software update delivery to end-users. However, consolidating functions on PC-class hardware does not address the real-time I/O challenges. A fundamental problem in such real-time solutions is the handling of device input and output in a timely manner. For example, a control system might require input data from a sensor to be sampled and processed regularly so that output signals to actuators occur within specific delay bounds. Input/output (I/O) devices connect to the host computer using different types of bus interfaces not necessarily supported by PC-class hardware natively. Examples of such interfaces include Controller Area Network (CAN) and FlexRay, which are prominent in the automotive world, but are not found in PC-class embedded systems. Universal Serial Bus (USB) is now ubiquitous in the PC-class domain, in part due to its support for many classes of devices with simplified hardware needed to connect to the host, and can be utilized to bridge this gap. 
USB provides the throughput and delay capabilities for next-generation high-bandwidth sensors to be integrated with actuators in control area networks. However, typical USB host controller drivers suffer from potential timing delays that affect the delivery of data between tasks and devices. This Ph.D. thesis examines the use of USB as the physical fabric for host-to-device and host-to-host communication, without special switching hardware or protocol translation logic, and through a unified programming interface. Combined with the real-time scheduling framework of the Quest RTOS, this work investigates how to form networks of I/O devices and computing nodes over USB with end-to-end timing guarantees. The main contribution of this thesis is a USB-centric design solution for real-time cyber-physical systems with distributed computing nodes.
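The end-to-end guarantees hinge on admission control of periodic USB traffic: USB 2.0 allows at most 80% of each 125 µs high-speed microframe to be reserved for periodic (interrupt/isochronous) transfers. A toy admission test in that spirit — the Quest-based scheduler in the thesis is far more elaborate, and the endpoint numbers below are purely illustrative:

```python
MICROFRAME_US = 125.0      # length of a high-speed USB microframe
PERIODIC_BUDGET = 0.80     # USB 2.0: at most 80% of a microframe may
                           # be reserved for periodic transfers

def admissible(endpoints, horizon=8):
    """Hyper-period admission test for periodic endpoints.
    Each endpoint is (period_in_microframes, service_time_us).
    Returns True iff no microframe in the horizon exceeds the
    periodic reservation budget."""
    for frame in range(horizon):
        load = sum(t for period, t in endpoints if frame % period == 0)
        if load > PERIODIC_BUDGET * MICROFRAME_US:
            return False
    return True

sensors = [(1, 40.0), (2, 30.0), (4, 20.0)]   # worst frame: 90 us
assert admissible(sensors)                     # 90 <= 100 us budget
assert not admissible(sensors + [(1, 15.0)])   # 105 > 100 us: reject
```

Rejecting over-committed endpoint sets up front is what makes bounded delivery delay possible once transfers are actually scheduled.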

IPv6: Politics of the Next Generation Internet

DeNardis, Laura Ellen 05 April 2006 (has links)
IPv6, a new Internet protocol designed to exponentially increase the global availability of Internet addresses, has served as a locus for incendiary international tensions over control of the Internet. Esoteric technical standards such as IPv6 appear, on the surface, socially insignificant. The technical community selecting IPv6 claimed to have excised sociological considerations from what they considered an objective technical design decision. Far from neutrality, however, the development and adoption of IPv6 intersect with contentious international issues: tensions between the United Nations and the United States, power struggles between international standards authorities, U.S. military objectives, international economic competition, third-world development objectives, and the promise of global democratic freedoms. This volume examines IPv6 in three overlapping epochs: the selection of IPv6 within the Internet's standards-setting community; the adoption and promotion of IPv6 by various stakeholders; and the history of the administration and distribution of the finite technical resources of Internet addresses. How did IPv6 become the answer to presumed address scarcity? What were the alternatives? Once IPv6 was developed, stakeholders expressed diverse and sometimes contradictory expectations for it. Japan, the European Union, China, India, and Korea declared IPv6 adoption a national priority and an opportunity to become more competitive in an American-dominated Internet economy. IPv6 activists espoused an ideological belief in IPv6, linking the standard with democratization, the eradication of poverty, and other social objectives. The U.S., with ample addresses, adopted a laissez-faire approach to IPv6, with the exception of the Department of Defense, which mandated an upgrade to the new standard to bolster distributed warfare capability. The history of IPv6 includes the history of the distribution of the finite technical resource of "IP addresses", the globally unique binary numbers required for devices to exchange information via the Internet. How was influence over IP address allocation and control distributed globally? This history of IPv6 explains what is at stake economically, politically, and technically in the development and adoption of IPv6, suggesting a theoretical nexus between technical standards and politics, and arguing that views lauding the Internet standards process for its participatory design approach ascribe unexamined legitimacy to a somewhat closed process. / Ph. D.

Engenharia de trafego multi-camada para grades / Multi-layer traffic engineering for grid networks

Batista, Daniel Macêdo 23 June 2006 (has links)
Advisors: Nelson Luis Saldanha da Fonseca, Fabrizio Granelli / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Grids are computing environments characterized by resource heterogeneity and by dynamism. Being dynamic environments, grids need processes that optimize the execution of applications dynamically as well: such processes must detect changes in the state of the grid and act to keep the execution time of applications as low as possible. Several existing proposals for dynamic optimization of grid applications address this need through task migration. This dissertation proposes a methodology that considers variations in the availability of hosts as well as in the state of the network. The proposed methodology is based on the general principles of traffic engineering and acts across several layers of the Internet architecture; it aims to minimize the execution time of applications while remaining simple and independent of both the application and the grid. The gains obtained by executing grid applications with the proposed method, versus executing them without it, are evaluated via simulation, with examples implemented in the NS-2 network simulator. This dissertation also proposes a family of schedulers, based on integer and mixed-integer programming, for scheduling tasks in grids; unlike other proposals in the literature, they model the state of the network as well as that of the hosts / Master's degree / Master of Computer Science
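The integer-programming formulations themselves are not reproduced in this listing; as an illustrative stand-in for a scheduler that models both host and network state, one can exhaustively search task-to-host assignments, charging each task its compute time plus the time to move its input data over the network (all names and numbers below are invented):

```python
from itertools import product

def best_schedule(task_cost, host_speed, input_mb, bandwidth):
    """Toy stand-in for the integer-programming schedulers: try every
    task->host assignment and keep the one minimizing the makespan,
    where each task pays compute time (cost/speed) plus network
    transfer time (input size / host bandwidth)."""
    hosts = list(host_speed)
    best = (float("inf"), None)
    for assign in product(hosts, repeat=len(task_cost)):
        finish = {h: 0.0 for h in hosts}
        for t, h in enumerate(assign):
            finish[h] += task_cost[t] / host_speed[h] \
                         + input_mb[t] / bandwidth[h]
        best = min(best, (max(finish.values()), assign))
    return best

makespan, assign = best_schedule(
    task_cost=[100, 100],                # MFLOP per task
    host_speed={"h1": 50, "h2": 25},     # MFLOP/s
    input_mb=[10, 10],
    bandwidth={"h1": 10, "h2": 100},     # MB/s seen by each host
)
```

A real instance would hand the same objective to an integer-programming solver; exhaustive search only serves to make the cost model explicit. Note how the fast host with a slow link and the slow host with a fast link end up sharing the work.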

Escalonadores de tarefas dependentes para grades robustos as incertezas das informações de entrada / Robust dependent task schedulers for grid networks

Batista, Daniel Macêdo 15 August 2018 (has links)
Advisor: Nelson Luis Saldanha da Fonseca / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Schedulers need information on application demands and on grid resource availability as input to derive efficient schedules for the tasks of a grid application. However, the information provided to schedulers differs from the true values that would have to be considered to obtain near-optimal schedules, due to the lack of central control in a grid, users' limited knowledge, and the imprecision of measurement tools. This thesis introduces two task schedulers, based on fuzzy optimization, that are robust to uncertainties in the information provided as input. The first scheduler deals with uncertainties in the application demands, while the other handles uncertainties in both the application demands and the resource availability. The effectiveness and efficiency of these robust schedulers are evaluated via simulation, and the schedules they produce are compared to those of their non-fuzzy counterparts, which are sensitive to uncertain information. Moreover, the efficacy of available-bandwidth estimators is assessed, via measurement, in order to evaluate their use in grid systems as sources of input information for robust schedulers / Doctorate / Computing Systems, Multimedia Networks / Doctor of Computer Science
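A simplified way to see what robustness to uncertain input buys, using interval demands and a min-max objective rather than the thesis's fuzzy optimization (names and numbers are invented for illustration):

```python
from itertools import product

def robust_schedule(demand_bounds, host_speed):
    """Illustrative min-max version of robust scheduling: task demands
    are only known as (low, high) intervals, so pick the assignment
    whose worst-case makespan (all demands at their upper bound) is
    smallest.  The thesis itself uses fuzzy optimization; this
    interval min-max is a simplified stand-in."""
    hosts = list(host_speed)
    best = (float("inf"), None)
    for assign in product(hosts, repeat=len(demand_bounds)):
        finish = {h: 0.0 for h in hosts}
        for t, h in enumerate(assign):
            _, high = demand_bounds[t]       # adversarial demand
            finish[h] += high / host_speed[h]
        best = min(best, (max(finish.values()), assign))
    return best

worst, assign = robust_schedule(
    demand_bounds=[(80, 120), (40, 60)],  # uncertain MFLOP per task
    host_speed={"fast": 20, "slow": 10},  # MFLOP/s
)
```

A scheduler that trusted point estimates (say, the interval midpoints) could pick an assignment that looks good on paper but degrades badly when the true demands land at the high end; optimizing the worst case bounds that degradation.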

Optimization and Heuristics for Cognitive Radio Design

Bharath Keshavamurthy (8756067) 12 October 2021 (has links)
Cognitive Radio technologies have been touted as instrumental in solving resource-allocation problems in resource-constrained radio environments. The adaptive computational intelligence of these radios facilitates the dynamic allocation of network resources, particularly the spectrum, a scarce physical asset. In addition to the consumer-driven innovation governing the wireless communication ecosystem, its associated infrastructure is increasingly viewed by governments around the world as a critical national security interest: the US military instituted the DARPA Spectrum Collaboration Challenge, which requires competitors to design intelligent radios that leverage optimization, A.I., and game-theoretic strategies in order to efficiently access the RF spectrum in an environment wherein every other competitor is vying for the same limited resources. In this work, we detail the design of our radio, i.e., the design choices made in each layer of the network protocol stack, strategies rigorously derived from convex optimization, the collaboration API, and heuristics tailor-made to tackle the unique scenarios emulated in this DARPA Grand Challenge. We present performance evaluations of key components of our radio in a variety of military and disaster-relief deployment scenarios that mimic similar real-world situations. Furthermore, focusing specifically on channel access in the MAC layer, we formulate the spectrum sensing and access problem as a POMDP; derive an optimal policy using approximate value iteration methods; prove that our strategy outperforms the state of the art and provides a means to control the trade-off between secondary network throughput and incumbent interference; and evaluate this policy on an ad-hoc distributed wireless platform comprising ESP32 radios, in order to study its implementation feasibility.
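The POMDP view of spectrum access can be made concrete with its core primitive: the Bayesian belief update for a two-state (idle/occupied) Markov channel observed through an imperfect detector. The threshold policy at the end is a simplification of the approximate-value-iteration policy described above, and all parameter values are illustrative:

```python
def belief_update(b_occ, observation, p_oo, p_io, p_det, p_fa):
    """One Bayes step of the POMDP belief that the channel is occupied.
    b_occ: prior P(occupied); transitions: p_oo = P(occ->occ),
    p_io = P(idle->occ); sensor: p_det = P(detect | occupied),
    p_fa = P(false alarm | idle).  `observation` is True if energy
    was detected in this sensing slot."""
    # predict: push the belief through the channel's Markov chain
    prior = b_occ * p_oo + (1 - b_occ) * p_io
    # correct: condition on the (imperfect) sensing outcome
    like_occ = p_det if observation else 1 - p_det
    like_idle = p_fa if observation else 1 - p_fa
    num = like_occ * prior
    return num / (num + like_idle * (1 - prior))

# A threshold on the belief trades secondary throughput against
# incumbent interference: lower threshold = more protective.
b = 0.5
for obs in [False, False, False]:      # three quiet sensing slots
    b = belief_update(b, obs, p_oo=0.8, p_io=0.2, p_det=0.9, p_fa=0.1)
transmit = b < 0.05                    # access only if P(occupied) is small
```

The full POMDP policy replaces the fixed threshold with one derived from value iteration over the belief space, which is where the throughput/interference trade-off is actually controlled.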

Attack graph approach to dynamic network vulnerability analysis and countermeasures

Hamid, Thaier K. A. January 2014 (has links)
It is widely accepted that modern computer networks (often presented as a heterogeneous collection of functioning organisations, applications, software, and hardware) contain vulnerabilities. This research proposes a new methodology to compute a dynamic severity cost for each state. Here a state refers to the behaviour of a system during an attack; an example of a state is one where an attacker could influence the information on an application to alter the credentials. This is performed by utilising a modified variant of the Common Vulnerability Scoring System (CVSS), referred to as the Dynamic Vulnerability Scoring System (DVSS). This calculates scores for intrinsic, time-based, and ecological metrics by combining related sub-scores and modelling the problem's parameters in a mathematical framework to develop a unique severity cost. The inherently static nature of CVSS affects the scoring value, so the author has developed a novel model to produce a DVSS metric that is more precise and efficient. In this approach, the final scores are computed from a number of parameters, including the network architecture, device settings, and the impact of vulnerability interactions. An attack graph (AG) is a security model representing the chains of vulnerability exploits in a network. A number of researchers have acknowledged the visual complexity of attack graphs and the lack of in-depth understanding they afford. Current attack graph tools are constrained to limited attributes or even rely on hand-generated input. The automatic collection of vulnerability information has been troublesome, and vulnerability descriptions are frequently created by hand or based on limited data. The network architectures and configurations, along with the interactions between the individual vulnerabilities, are considered in the method of computing the cost using the DVSS and a dynamic cost-centric framework. 
A new methodology was developed to present an attack graph with a dynamic cost metric based on DVSS, along with a novel methodology to estimate and represent the cost-centric approach for each host's states. The framework is applied to a test network, using the Nessus scanner to detect known vulnerabilities, and these results are used to build and represent the dynamic cost-centric attack graph using ranking algorithms (in a fashion similar to Mehta et al., 2006 and Kijsanayothin, 2010). However, instead of using the vulnerabilities of each host directly, a CostRank Markov model has been developed using a novel cost-centric approach, thereby reducing the complexity of the attack graph and mitigating the visibility problem. A parallel algorithm is developed to implement CostRank, in order to expedite the state-ranking calculations as the number of hosts and/or vulnerabilities grows. In the same way, the author aims to secure large-scale networks, which require fast and reliable computing to rank enormous graphs with thousands of vertices (states) and millions of arcs (each representing an action that moves the system from one state to another). The proposed approach focuses on a parallel CostRank computational architecture to appraise the improvement in CostRank calculations and the scalability of the algorithm. In particular, partitioning the input data, graph files, and ranking vectors with a load-balancing technique can enhance the performance and scalability of parallel CostRank computations. A practical model of the parallel CostRank calculation is undertaken, resulting in a substantial decrease in communication overhead and iteration time. The results are presented analytically in terms of scalability, efficiency, memory usage, speed-up, and input/output rates. 
Finally, a countermeasures model is developed to protect against network attacks using a Dynamic Countermeasures Attack Tree (DCAT). The DCAT is built as follows: (i) use the scalable parallel CostRank algorithm to determine the critical assets that system administrators need to protect; (ii) use the Nessus scanner to determine the vulnerabilities associated with each asset, using the dynamic cost-centric framework and DVSS; (iii) review all published mitigations for those vulnerabilities; (iv) assess how well each security solution mitigates the risks; (v) assess the DCAT algorithm in terms of effective security cost, probability, and cost/benefit analysis to reduce the total impact of a specific vulnerability.
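CostRank, as described, is a PageRank-style ranking over attack-graph states. A small serial sketch is below; the cost-biased teleport distribution is one plausible reading of the cost-centric weighting (the thesis's exact formulation and its parallel partitioning are not reproduced here), and the example graph is invented:

```python
def cost_rank(adj, cost, damping=0.85, iters=100):
    """PageRank-style ranking of attack-graph states, sketched after
    the CostRank idea: the teleport distribution is biased by each
    state's severity cost (a DVSS-like score), so costly states and
    states reachable from them rank higher."""
    n = len(adj)
    total = sum(cost)
    base = [c / total for c in cost]          # cost-biased teleport
    rank = base[:]
    for _ in range(iters):
        new = [(1 - damping) * base[i] for i in range(n)]
        for i, outs in enumerate(adj):
            if outs:
                share = damping * rank[i] / len(outs)
                for j in outs:                # spread rank along arcs
                    new[j] += share
            else:                             # dangling (goal) state
                for j in range(n):
                    new[j] += damping * rank[i] * base[j]
        rank = new
    return rank

# 3 states; state 2 (full compromise) is costly and absorbs paths
adj = [[1, 2], [2], []]          # arcs: attacker actions between states
cost = [1.0, 2.0, 5.0]           # per-state severity costs
ranks = cost_rank(adj, cost)
```

The parallel version in the thesis partitions the graph, ranking vector, and input data across workers; the per-iteration computation above is what gets distributed.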

Learning computer systems in a distributed project course : The what, why, how and where

Berglund, Anders January 2005 (has links)
Senior university students taking an internationally distributed project course in computer systems find themselves in a complex learning situation. To understand how they experience computer systems and act in their learning situation, the what, the why, the how and the where of their learning have been studied from the students’ perspective. The what aspect concerns the students’ understanding of concepts within computer systems: network protocols. The why aspect concerns the students’ objectives to learn computer systems. The how aspect concerns how the students go about learning. The where aspect concerns the students’ experience of their learning environment. These metaphorical entities are then synthesised to form a whole. The emphasis on the students’ experience of their learning motivates a phenomenographic research approach as the core of a study that is extended with elements of activity theory. The methodological framework that is developed from these research approaches enables the researcher to retain focus on learning, and specifically the learning of computer systems, throughout. By applying the framework, the complexity in the learning is unpacked and conclusions are drawn on the students’ learning of computer systems. The results are structural, qualitative, and empirically derived from interview data. They depict the students’ experience of their learning of computer systems in their experienced learning situation and highlight factors that facilitate learning. The results comprise sets of qualitatively different categories that describe how the students relate to their learning in their experienced learning environment. The sets of categories, grouped after the four components (what, why, how and where), are synthesised to describe the whole of the students’ experience of learning computer systems. 
This study advances the discussion about learning computer systems and demonstrates how theoretically anchored research contributes to teaching and learning in the field. Its multi-faceted, multi-disciplinary character invites further debate, and thus, advances the field.
