1

Quality of service for high-speed interconnection networks onboard spacecraft

Ferrer Florit, Albert. January 2013.
State-of-the-art onboard spacecraft avionics use SpaceWire networks to interconnect payload data-handling sub-systems, including high data-rate sensors and instruments, processing units, and memory devices. SpaceWire is an interconnection network composed of nodes and routers connected by bi-directional, point-to-point, high-speed, serial data communication links. SpaceWire is established as one of the main data-handling protocols and is used on many ESA, NASA and JAXA spacecraft. SpaceWire has been very successful because it is fast, flexible and simple to use and implement. However, it does not implement Quality of Service mechanisms, which aim to provide guarantees of reliability and timely delivery for data generated by network clients. Quality of Service is increasingly being deployed in commercial ground technologies, and its availability for space applications, which require high reliability and performance, is of great interest to the space community. This thesis investigates how Quality of Service can be provided in existing SpaceWire networks. Existing solutions for ground-based technologies cannot be used directly because of the constraints imposed by the limitations of space-qualified electronics. Owing to these limitations, SpaceWire uses wormhole routing, which has many benefits but makes it more challenging to obtain timing guarantees and to achieve deterministic behaviour. These challenges are addressed in this work with a careful analysis of existing Quality of Service techniques and the implementation of a novel set of protocols designed specifically for SpaceWire networks. These new protocols target specific use cases and employ different mechanisms to achieve the required reliability, timely delivery and determinism. Traditional and novel techniques are deployed for the first time in SpaceWire networks. In particular, segmentation, acknowledgements, retry, time-division multiplexing and cross-layer techniques are considered, analysed, implemented and evaluated through extensive prototyping. SpaceWire provides high-rate data transfers, but the next generation of payload instruments will require multi-gigabit capabilities. SpaceFibre is a new onboard networking technology under development which aims to satisfy these new requirements while keeping compatibility with SpaceWire user applications. As a new standard, SpaceFibre offers the opportunity to implement Quality of Service techniques without the limitations imposed by the SpaceWire standard. The last part of this thesis focuses on the specification of the SpaceFibre standard in order to provide the Quality of Service required by the next generation of space applications. This work includes analytical studies, software simulations and hardware prototyping of new concepts which are the basis of the Quality of Service mechanisms defined in the new SpaceFibre standard. A critical contribution is therefore made to the definition and evaluation of a novel Quality of Service solution which provides high reliability, bandwidth reservation, priority and deterministic delivery on SpaceFibre links.
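The abstract lists time-division multiplexing and bandwidth reservation among the QoS mechanisms studied. As a purely illustrative sketch (not the protocols defined in the thesis or in the SpaceFibre standard; the class and field names here are invented), the following shows how a repeating slot table can give each virtual channel a deterministic share of a link:

```python
# Illustrative only: a repeating time-slot table in the spirit of the
# time-division multiplexing QoS mechanism discussed above. Names and the
# table layout are hypothetical, not taken from the thesis or the standards.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TimeSlotScheduler:
    slot_table: List[Optional[int]]   # slot index -> virtual channel allowed to send (None = idle)
    slot_duration_us: float = 100.0   # duration of one slot in microseconds

    def active_channel(self, time_us: float) -> Optional[int]:
        """Virtual channel allowed to transmit at `time_us`.

        The schedule repeats every len(slot_table) slots (one epoch), which is
        what makes the worst-case latency of a reserved channel predictable.
        """
        slot = int(time_us // self.slot_duration_us) % len(self.slot_table)
        return self.slot_table[slot]


# Example epoch: VC0 holds half the slots, VC1 and VC2 a quarter each.
scheduler = TimeSlotScheduler(slot_table=[0, 1, 0, 2])
assert scheduler.active_channel(time_us=250.0) == 0   # third slot of the epoch
```

In practice such a schedule would be combined with the reliability mechanisms the abstract mentions (segmentation, acknowledgement and retry) rather than used on its own.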
2

Cost-effective Fault Tolerant Routing In Networks On Chip

Adanova, Venera. 01 September 2008.
The growing complexity of Systems on Chip (SoC) introduces interconnection problems. As a solution to this communication bottleneck, a new paradigm, Networks on Chip (NoC), has been proposed. Along with high performance and reliability, NoC brings area and energy constraints. In this thesis we concentrate mainly on keeping a communication-centric design environment fault-tolerant while limiting area overhead. Previous research suggests adopting fault-tolerance solutions from multiprocessor architectures. However, multiprocessor architectures rely heavily on buffering, leading to costly solutions. We propose to reconsider the general router model by introducing central buffers, which reduce the required buffer size. In addition, we present a new fault-tolerant routing algorithm which effectively utilises the buffers at hand, without requiring additional buffers and without detriment to performance.
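As an illustration of the kind of decision a fault-tolerant NoC router has to make (this is plain dimension-order XY routing with a simple fault check, not the algorithm proposed in the thesis), consider the following sketch:

```python
# Illustrative sketch only: a minimal fault-aware routing decision for a
# 2D mesh NoC. All names and the fault representation are hypothetical.
from typing import Dict, Optional, Set, Tuple

Coord = Tuple[int, int]   # (x, y) position of a router in the mesh
Port = str                # "east", "west", "north", "south", "local"


def route(current: Coord, dest: Coord, faulty_ports: Dict[Coord, Set[Port]]) -> Optional[Port]:
    """Pick an output port at `current` for a packet heading to `dest`."""
    cx, cy = current
    dx, dy = dest
    if (cx, cy) == (dx, dy):
        return "local"

    # Preferred order: resolve the X offset first, then Y (XY routing).
    preferences = []
    if dx > cx:
        preferences.append("east")
    elif dx < cx:
        preferences.append("west")
    if dy > cy:
        preferences.append("north")
    elif dy < cy:
        preferences.append("south")

    broken = faulty_ports.get(current, set())
    for port in preferences:
        if port not in broken:
            return port
    return None  # no productive, healthy port: a real algorithm would detour


# Example: the east link of router (1, 1) is faulty, so the packet goes north first.
assert route((1, 1), (2, 2), {(1, 1): {"east"}}) == "north"
```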
3

A hidden Markov model process for wormhole attack detection in a localised underwater wireless sensor network.

Obado, Victor Owino. January 2012.
M. Tech. Electrical Engineering. / Aims to develop a detection procedure designed to have as little impact as possible on the resource-constrained sensor nodes.
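For context on the general technique (not the specific model developed in the thesis, whose states, observations and thresholds are its own), a hidden-Markov-model detector typically scores an observation sequence with the forward algorithm and flags sequences that are unlikely under a model of normal traffic:

```python
# Illustrative sketch only: HMM forward-algorithm scoring of an observation
# sequence. The toy model and observations below are hypothetical.
import numpy as np


def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM.

    obs     : list of observation symbol indices
    start_p : (n_states,) initial state probabilities
    trans_p : (n_states, n_states) transition probabilities
    emit_p  : (n_states, n_symbols) emission probabilities
    """
    alpha = start_p * emit_p[:, obs[0]]      # forward variables at t = 0
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()              # rescale to avoid underflow
    for symbol in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, symbol]
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik


# Toy two-state model of "normal" link behaviour; observations are quantised
# end-to-end delays (0 = short, 1 = long). In practice the score would be
# compared against a threshold calibrated on attack-free traffic.
start_p = np.array([0.6, 0.4])
trans_p = np.array([[0.7, 0.3],
                    [0.4, 0.6]])
emit_p = np.array([[0.2, 0.8],
                   [0.9, 0.1]])

score = forward_log_likelihood([0, 1, 0, 0, 1, 0], start_p, trans_p, emit_p)
print(f"log-likelihood under the normal-traffic model: {score:.2f}")
```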
4

Performance modelling of wormhole-routed hypercubes with bursty traffic and finite buffers

Kouvatsos, Demetres D., Assi, Salam, Ould-Khaoua, Mohamed. January 2005.
An open queueing network model (QNM) is proposed for wormhole-routed hypercubes with finite buffers and deterministic routing, subject to a compound Poisson arrival process (CPP) with geometrically distributed batches or, equivalently, a generalised exponential (GE) interarrival-time distribution. The GE/G/1/K queue and appropriate GE-type flow formulae are adopted, as cost-effective building blocks, in a queue-by-queue decomposition of the entire network. Consequently, analytic expressions for the channel holding time, buffering delay, contention blocking and mean message latency are determined. The validity of the analytic approximations is demonstrated against results obtained through simulation experiments. Moreover, it is shown that wormhole-routed hypercubes suffer progressive performance degradation with increasing traffic variability (burstiness).
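For reference, the generalised exponential (GE) interarrival-time distribution used as the model's building block is commonly parameterised by its mean rate and squared coefficient of variation; the standard form from the GE queueing literature (summarised here, not reproduced from the paper) is:

```latex
% GE interarrival-time distribution with mean 1/\lambda and squared
% coefficient of variation C^2 (standard form from the GE queueing literature).
F(t) = P(T \le t) = 1 - \tau\, e^{-\tau \lambda t},
\qquad t \ge 0, \qquad \tau = \frac{2}{C^{2} + 1}.
% Equivalently, batches arrive in a Poisson stream of rate \tau\lambda with
% geometrically distributed batch sizes of mean 1/\tau -- the compound Poisson
% (CPP) view of bursty traffic referred to in the abstract.
```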
5

Modeling, Design And Evaluation Of Networking Systems And Protocols Through Simulation

Lacks, Daniel Jonathan. 01 January 2007.
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits that apply across different domains: it reduces the cost of building prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the modeling of physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer must spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms and the desire for a common infrastructure with which to model them. The first simulator, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain a higher traffic load than currently used interconnects. The second simulator, the Cluster Leader Logic (CLL) Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can achieve up to 45% power savings and up to a 25% reduction in queuing delay compared with GPS-QHRA. The third simulator models a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that, in the worst case, 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then extended to perform basic HLA operations. Results show successful HLA functions, including creating, joining and resigning from a federation, time management, and event publication and subscription.
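As a small, generic illustration of the k-ary n-cube topology that the 3DInterconnect simulator models (textbook material, not code from the simulator itself):

```python
# Illustrative sketch only: addressing and minimal hop distance in a k-ary
# n-cube, i.e. an n-dimensional torus with k nodes per dimension.

def to_digits(node: int, k: int, n: int) -> list:
    """Radix-k coordinates of a node in a k-ary n-cube (node 0 .. k**n - 1)."""
    digits = []
    for _ in range(n):
        digits.append(node % k)
        node //= k
    return digits


def hop_distance(src: int, dst: int, k: int, n: int) -> int:
    """Minimal number of hops between two nodes, using the wraparound links."""
    total = 0
    for a, b in zip(to_digits(src, k, n), to_digits(dst, k, n)):
        delta = abs(a - b)
        total += min(delta, k - delta)   # go either way around the ring
    return total


# Example: a 4-ary 3-cube (64 nodes); node 42 has digits (2, 2, 2), i.e. it is
# 2 hops away from node 0 in each of the 3 dimensions.
assert hop_distance(0, 42, k=4, n=3) == 6
```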
