
Speech quality assessment in communication networks with varying delay

Malfait, Ludovic January 2010 (has links)
This thesis discusses the assessment of speech quality transmitted through telecommunication networks. The aim is to produce a model able to estimate the overall listening quality of speech signals as measured by subjective tests. Objective models for speech quality assessment have been developed over the last twenty years, and the most widely adopted is PESQ, the currently in-force ITU-T Recommendation P.862. PESQ shows inaccuracy when assessing signals recorded from modern telecommunication networks that exhibit highly variable delay, such as Voice over IP. This issue is investigated and addressed in this thesis. Objective models for quality assessment are generally designed to predict subjective tests, on which they are trained and verified. The behaviour of the model and its accuracy are therefore highly dependent on the reliability and the resolution of the subjective tests. Some aspects of subjective test methodologies are discussed in this thesis. The most reliable speech quality assessment models are perceptual algorithms that compare short-term representations of the input and the output signals of the system under test. This type of model relies on an accurate estimation of the time relationship between the two signals. This thesis shows that the inaccuracy of PESQ for quality assessment of modern telecommunication networks is due to its time alignment. Previous time-alignment methods for objective models are not suited to frequent delay variation. A new time-alignment technique based on correlation of frequency-domain representations and short-term delay histograms is presented, allowing robust alignment in the presence of highly varying delay. A new objective model built from the integration of the proposed time alignment with PESQ was verified on a very large number of subjective tests.
Results show significant improvements over PESQ in situations presenting frequent delay variations, while maintaining a similar level of accuracy in cases of occasional variations.

Policy-driven traffic engineering in energy-aware ISP backbone networks

Francois, Frederic January 2013 (has links)
The excessive energy consumption of backbone networks is causing concerns among network operators. This thesis focuses on the design of Energy-aware Traffic Engineering (ETE) schemes which improve the energy-efficiency of different backbone networks by enabling the delivery of traffic by the smallest number of network devices, so that the remaining devices can go to sleep during periods of low traffic demand. The first proposed ETE scheme is called Time-driven Link Sleeping (TLS), which uses only two network routing topologies: the full topology with all links active, and a reduced one with a subset of links sleeping. The key novelty of TLS lies in its ability to jointly optimize the reduced network topology and the off-peak period during which it is operated. Moreover, an extension to TLS makes it robust to single link failures. The second ETE scheme is a Green Load-balancing Algorithm (GLA) which complements TLS and other existing ETE schemes by jointly optimizing the IGP link weights in backbone networks for improved load-balancing and energy-efficiency after these existing schemes put links to sleep. The final contribution is an online distributed ETE scheme called Green Backup Paths (GBP) which dynamically diverts traffic from selected links onto their backup paths, pre-installed to protect against link failure, so that these links have the opportunity to go to sleep without affecting the primary purpose of the backup paths. The distributed nature of GBP makes it scalable to large networks and very responsive to sudden traffic changes, since multiple routers can concurrently make interference-free decisions. The simple TLS scheme with GLA is ideally suited to networks which experience a regular traffic pattern, because of their offline nature, while the more complex GBP scheme is more suitable when traffic is dynamic, because of its online nature.

Space-IP : context-aware QoS provisioning for long-distance networks and delay-sensitive applications

Peoples, Cathryn January 2009 (has links)
Delay Tolerant Networks (DTNs) challenge the operation of protocols used in terrestrial networks at the physical, medium access control (MAC), and transport layers of the protocol stack. Deep space, the most extreme DTN, is an environment where transmission reliability, accuracy, and timeliness may be more difficult to achieve than in terrestrial networks due to harsh and dynamic conditions. Transport protocols are being developed for use on the long-distance Interplanetary backbone; however, each has advantages and disadvantages in different scenarios. Future protocols must maximise link utilisation when connectivity exists, minimise communication attempts and wasted resource consumption when connectivity does not exist, and accommodate network congestion. Furthermore, performance should be achieved autonomously to minimise operating costs and optimise decision-making responses. Our approach, the Context-Aware Brokering (CAB) algorithm, is an autonomous middleware with context-aware capability which resides alongside any protocol stack between the application and transport layers. It can enable communication over both long- and short-distance links, and the version executed is optimised depending on the scenario. The CAB selects the most appropriate protocol for each transmission by comparing application requirements against environment constraints, intelligently configures it to maximise performance, and takes intermediary action when network dynamics compromise QoS levels. Its overall objective is to allow transmission reliability, accuracy, latency, or sustainability QoS to be achieved, or to enable a balance to be maintained between them, depending on the transmission scenario. Positive results are produced in a range of scenarios from an implementation of the CAB algorithm in ns-2.30.
These are observed primarily when the CAB identifies environment constraints in relation to application requirements in advance of transmission beginning, and/or when the environment is dynamic and the CAB takes intermediary action to autonomously accommodate changes. A cost-benefit trade-off in the CAB architecture exists, and its positive impact on performance is most significant in dynamic environments which are long-distance and/or mobile, when network conditions change in ways which are accommodated by the policy rules, and for applications without real-time and interactive transmission requirements. In a limited selection of scenarios, the CAB introduces overheads to the transmission without parallel improvements in performance. This is the case when pre-transmission assistance from the CAB is not required and the network remains stable until the transmission completes; overhead costs are then incurred without any additional positive impact from the CAB. The importance of deploying a CAB in all scenarios, however, lies in the ability to exploit the potential of achieving a minimum level of service in the unpredictable and dynamic DTN.

Monitoring and analysis of network traffic for information security

Dupasquier, Benoit January 2013 (has links)
Network traffic monitoring and analysis has several practical implications. It can be used for malicious or legitimate purposes, aimed at improving the quality of communications, enhancing the security of a system, or extracting information via side-channels. Such analysis can even cope with the use of encryption and obfuscation and extract meaningful information from huge amounts of Internet traffic. First, this thesis explores its use to investigate the leakage of information from Skype, a widely used and encrypted VoIP application. VoIP has experienced tremendous growth over the last few years and is now widely used among the public and for business purposes. The security of such VoIP systems is often assumed, creating a false sense of privacy. Experiments have shown that isolated phonemes can be classified and given sentences identified. By using the DTW algorithm, frequently used in speech processing, an accuracy of 60% can be reached. The results can be further improved by choosing specific training data, reaching an accuracy of 83% under specific conditions. The initial results being speaker-dependent, an approach involving the Kalman filter is proposed to extract the kernel of all training signals. Second, the use of traffic monitoring and analysis for network security is investigated to detect hosts infected with the ZeuS botnet, a recent infamous trojan that steals banking information and one of the most prominent cyber threats to date. Cyber threats are becoming ever more sophisticated, persistent and difficult to detect. As highlighted by recent success stories of malware such as the ZeuS botnet, current defence solutions are not enough to thwart these threats. Therefore, it is of paramount importance to be able to detect and mitigate these kinds of malware. This work proposes a detailed analysis of the network communications that occur between a bot and its master as part of the command and control traffic.
This research identifies six key attributes which provide a reliable way of detecting hosts infected by the ZeuS botnet. These discoveries are then used in combination with different machine learning algorithms in order to prove their validity. Finally, the use of IBM QRadar, a commercial SIEM product, to detect ZeuS-infected hosts is investigated.

Phase-encoding : an event-based scheme for on-chip signalling

D'Alessandro, Crescenzo Sabato January 2009 (has links)
This work describes the use of Phase-Encoding, a novel signalling scheme which encodes each item of data into the order of events on a number of wires, all of which switch to indicate data transmission. The number of distinct symbols available on n wires is n!, so the number of bits per symbol grows with the number of wires; both rising and falling edges can be used as encoding events, doubling the bit rate for a given frequency in an efficient and economical manner. The signalling scheme can be used in asynchronous and synchronous environments, and is implemented using a source-synchronous approach; its main advantage consists in the "merging" of the clock and data information into the data stream in an area- and, under some circumstances, power-efficient manner. Also, the probability of an induced fault on one or more wires affecting the signalling is limited to the "event window" during which a symbol is actually sent; any induced fault outside this window is filtered out by the receiver.

Integrated Storage Area Networks (INSTANT)

Pranggono, Bernardi January 2009 (has links)
Storage Area Networks (SANs) have become an essential part of today's enterprise computing. SAN technologies are starting to be widely used for data backup, recovery, and storage consolidation. INSTANT is a DTI-funded collaboration between two British universities, the University of Wales Swansea and the University of Cambridge, and one industrial collaborator, ALPS. The principal objectives of the project are to devise a new form of fibre-based, high-capacity, distributed, dynamic, reconfigurable SAN and to enhance the performance and reconfigurability of the network using recently developed low-cost Wavelength Division Multiplexing (WDM) technologies. In this thesis, we present results relating to a new metro WDM storage demonstrator project, the INtegrated STorage Area NeTwork (INSTANT). A comprehensive literature review on SANs and WDM technologies was conducted. Based on the reviews, we propose a novel SAN architecture and MAC protocols for SANs in a metro WDM network setting. Several possible solutions are proposed: storage area networking with fixed packet size, with variable packet size based on the real IP datagram distribution, and with a sectioned ring. We then discuss and evaluate the network performance of the MAC protocol utilised in the proposed architectures. Performance analysis in real-world scenarios and under realistic network traffic is performed. Two different traffic models were developed and implemented to simulate realistic traffic patterns. Both symmetric and asymmetric traffic scenarios are considered. Network performance in terms of throughput, delay and packet dropping probability is presented and studied. A novel data mirroring technique for metro WDM SANs in ring network architectures is also proposed and evaluated. Finally, traffic statistics in metro WDM SANs are also studied and discussed.
Our approach considers all traffic occurring during a conversation as a single bidirectional flow of data, which we divide into upstream and downstream traffic. We study upstream and downstream traffic separately, building for each of them estimates of the number of packets and of the packet inter-arrival time (IAT).

Self-organising, self-randomising peer-to-peer networks

Handley, Andrew Jobriath January 2013 (has links)
Peer-to-peer networks are self-organising communications networks formed from independent hosts. Their decentralised nature makes them massively scalable but complicates the task of global network coordination. How can such a network detect and repair topological damage in the absence of a central authority? We present two classes of self-stabilising networks based on random regular graphs. Random regular graphs make excellent topologies for communications networks due to their small diameter, expansion properties, and extreme resilience to accumulated damage or attack. The networks presented here are self-stabilising in the sense that they automatically attempt to recover from illegal states. They are suitable for efficient search, peer discovery, or the construction of a Distributed Hash Table. In a random network, self-stabilisation does not imply the maintenance of delicate structure but the elimination of structure: self-randomisation. This work addresses small, individually fast randomising operations (the switch, the flip and the transposition) that occur spontaneously throughout the network without any form of coordination. These operations quickly randomise any connected network. Rigorous bounds exist showing that they mix in time polynomial in the network size, and simulation results suggest that an order of O(n log n) suffices. In studying the behaviour of these operations we give two novel extensions of the canonical path technique, a method for analysing the mixing rates of Markov chains. The first is a two-stage direct method to transfer mixing-time bounds from one chain to another, similar to but distinct from Markov chain comparison. The directness of this approach allows for a much tighter bound than in previous work, especially when the two chains have distinct state spaces. The second is a method applicable when canonical path congestion is expected to be good but is poor in the worst case.
This bears a resemblance to Valiant's randomised routing on the hypercube. Finally, we demonstrate a link between one of our network classes and Latin rectangles, a combinatorial object of independent interest. Our results on self-randomisation give the first fully polynomial randomised approximation scheme for these objects.

Improvement of High Rate Wireless Personal Area Network Protocol (IEEE 802.15.3)

Sulaiman, Thafer January 2008 (has links)
IEEE 802.15.3 is a High Rate (HR) Wireless Personal Area Network (WPAN) standard, standardised by the IEEE in 2003, which specifies the MAC and PHY layers for wireless connections between personal devices. This thesis proposes enhancements to the HR WPAN standard, including the Piconet Coordinator (PNC) selection criteria and power management. The following topics have been studied, analysed, simulated and presented in this thesis:
1. Piconet Coordinator selection criteria
2. User centricity in the WPAN using the Personal Identification Device (PID)
3. User detection (using the PID)
4. WPAN routing
Due to the vital functionality of the PNC in the WPAN, the most capable device in the WPAN is selected as the PNC. The original WPAN standard presented a set of PNC criteria based on the PNC candidate's potential capabilities, such as the maximum transmitter power and the maximum number of association requests the device can handle. The proposed criteria are instead based on the actual PNC capabilities, such as the number of devices directly connected to the PNC candidate. The proposed criteria extension field is designed and integrated within the standard model. The simulation results show that WPAN functionality is improved when the proposed PNC criteria are applied, compared to the original criteria. The concepts of user centricity and user detection have been introduced to the WPAN in order to make the network friendlier to use and to enhance the standard's security and data protection. According to the proposed PID design, the network detects the user's vicinity and is activated automatically accordingly. Network activity is based on the user's location and follows the nearest device used by the network owner. This is a new feature added to the original WPAN standard. Finally, the WPAN routing issue has been studied and analysed, and a mathematical model has been designed and analysed for piconet coverage.
Accordingly, a new routing approach has been proposed, based on enhancing the PNC with routing information.

A context information service for mobile applications

Wang, Feng January 2005 (has links)
No description available.

Routing in multi-class queueing networks

Kirkbride, Christopher January 2004 (has links)
We consider the problem of routing (incorporating local scheduling) in a distributed network. Dedicated jobs arrive directly at their specified station for processing. The choice of station for generic jobs is open. Each job class has an associated holding cost rate. We aim to develop routing policies to minimise the long-run average holding cost rate. We first consider the class of static policies. Dacre, Glazebrook and Niño-Mora (1999) developed an approach to the formulation of static routing policies, in which the work at each station is scheduled optimally, using the achievable region approach. The achievable region approach attempts to solve stochastic optimisation problems by characterising the space of all possible performances and optimising the performance objective over this space. Optimal local scheduling takes the form of a priority policy. Such static routing policies distribute the generic traffic to the stations via a simple Bernoulli routing mechanism. We provide an overview of the achievements made in following this approach to static routing. In the course of this discussion we expand upon the study of Becker et al. (2000), in which they considered routing to a collection of stations specialised in processing certain job classes, and we consider how the composition of the available stations affects system performance for this particular problem. We conclude our examination of static routing policies with an investigation into a network design problem in which the number of stations available for processing remains to be determined. The second class of policies of interest is the class of dynamic policies. General dynamic programming (DP) theory asserts the existence of a deterministic, stationary and Markov optimal dynamic policy. However, a full DP solution may be unobtainable, and the theoretical difficulties posed by even simple routing problems suggest that a closed-form optimal policy may not be available. This motivates a requirement for good heuristic policies.
We consider two approaches to the development of dynamic routing heuristics. We develop an idea proposed by Krishnan (1987), in the context of simple single-class systems, by applying a single policy improvement step to a given static policy. The resulting dynamic policy is shown to be of simple structure and easily computable. We include an investigation into the comparative performance of the dynamic policy against a number of competitor policies, and of the performance of the heuristic as the number of stations in the network changes. In our second approach, the generic traffic may only access processing when the station has been cleared of all (higher priority) jobs, and can be considered as background work. We deploy a prescription of Whittle (1988), developed for restless bandit problems (RBPs), to develop a suitable approach to station indexation. Taking an approximative approach to Whittle's proposal results in a very simple form of index policy for routing the generic traffic. We investigate the closeness to optimality of the index policy and compare the performance of both of the dynamic routing policies developed here.
