191

Traffic locality oriented route discovery algorithms for mobile ad hoc networks

Al-Rodhaan, Mznah A. January 2009
There has been a growing interest in Mobile Ad hoc Networks (MANETs), motivated by advances in wireless technology and the range of potential applications that might be realised with such technology. Due to their lack of infrastructure and their dynamic nature, MANETs demand a new set of networking protocols to harness the full benefits of these versatile communication systems. A great deal of research activity has been devoted to developing on-demand routing algorithms for MANETs. The route discovery processes used in most on-demand routing algorithms, such as Dynamic Source Routing (DSR) and Ad hoc On-demand Distance Vector (AODV), rely on simple flooding as the broadcasting technique for route discovery. Although simple flooding is easy to implement, it dominates the routing overhead, leading to the well-known broadcast storm problem that results in packet congestion and excessive collisions. A number of routing techniques have been proposed to alleviate this problem, some of which aim to improve the route discovery process by restricting the broadcast of route request packets to only the essential part of the network. Ideally, a route discovery should stop when a receiving node reports a route to the required destination. However, this cannot be achieved efficiently without the use of external resources, such as GPS location devices. In this thesis, a new locality-oriented route discovery approach is proposed and exploited to develop three new algorithms that improve the route discovery process in on-demand routing protocols. These algorithms are motivated by the fact that various patterns of traffic locality occur quite naturally in MANETs, since groups of nodes communicate frequently with each other to accomplish common tasks. Some of these algorithms manage to reduce end-to-end delay while incurring lower routing overhead than existing approaches such as the simple flooding used in AODV. The three algorithms are based on a revised concept of traffic locality in MANETs which relies on identifying a dynamic zone around a source node, where the zone radius depends on the distribution of the nodes with which the source is "mostly" communicating. This traffic locality concept forms the basis of our Traffic Locality Route Discovery Approach (TLRDA), which aims to improve the route discovery process in on-demand routing protocols. A neighbourhood region is generated for each active source node, containing "most" of its destinations; from that source node's perspective, the whole network is thus divided into two non-overlapping regions, neighbourhood and beyond-neighbourhood, centred at the source node. Route requests are processed normally within the neighbourhood region, according to the routing algorithm used. Outside this region, however, various measures are taken to impede such broadcasts and, ultimately, to stop them once they have outlived their usefulness. The approach is adaptive: the boundary of each source node's neighbourhood is continuously updated to reflect its communication behaviour. TLRDA is the basis for three new route discovery algorithms, namely Traffic Locality Route Discovery Algorithm with Delay (TLRDA-D), Traffic Locality Route Discovery Algorithm with Chase (TLRDA-C), and Traffic Locality Expanding Ring Search (TL-ERS).
In TLRDA-D, any route request travelling in its source node's beyond-neighbourhood region is deliberately delayed in order to give priority to unfulfilled route requests. In TLRDA-C, this approach is augmented with chase packets that target their associated route requests once the requested route has been discovered. In TL-ERS, the search is conducted by covering three successive rings: the first ring covers the source node's neighbourhood region, an unsatisfied route request in this ring triggers a second ring whose radius is double that of the first, and failing that, the third ring covers the whole network as the algorithm finally resorts to flooding. Detailed performance evaluations are provided, using both mathematical and simulation modelling, to investigate the behaviour of the TLRDA-D, TLRDA-C, and TL-ERS algorithms and to demonstrate their effectiveness relative to existing approaches. Our results reveal that TLRDA-D and TLRDA-C minimise end-to-end packet delay while TLRDA-C and TL-ERS exhibit low routing overhead. Moreover, the results indicate that equipping AODV with our new route discovery algorithms greatly enhances its performance in terms of end-to-end delay, routing overhead, and packet loss.
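
To make the TL-ERS schedule above concrete, here is a minimal Python sketch (an illustration, not the thesis's implementation; `send_route_request`, `wait_for_reply` and the neighbourhood-radius estimate are hypothetical stand-ins for the routing layer): the first ring covers the source's estimated neighbourhood, an unsatisfied request doubles the radius, and the final ring floods the whole network.

```python
NETWORK_DIAMETER = 35  # a TTL large enough to flood the whole network

def tl_ers_route_discovery(send_route_request, wait_for_reply,
                           neighbourhood_radius, timeout=0.5):
    """Try three successive rings; return a route or None."""
    rings = [
        neighbourhood_radius,        # ring 1: the source's neighbourhood
        2 * neighbourhood_radius,    # ring 2: double the first ring
        NETWORK_DIAMETER,            # ring 3: whole network (flooding)
    ]
    for ttl in rings:
        send_route_request(ttl)      # broadcast RREQ limited by TTL
        route = wait_for_reply(timeout)
        if route is not None:
            return route             # a receiving node reported a route
        timeout *= 2                 # wider ring, longer round trip
    return None                      # destination unreachable

if __name__ == "__main__":
    # Stub transport for illustration: a reply arrives only once the
    # TTL reaches 20 hops, i.e. on the third (flooding) ring here.
    def send(ttl): send.last_ttl = ttl
    def wait(timeout): return "route" if send.last_ttl >= 20 else None
    print(tl_ers_route_discovery(send, wait, neighbourhood_radius=5))
```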
192

Dynamic trust negotiation for decentralised e-health collaborations

Ajayi, Oluwafemi O. January 2009
In the Internet age, the geographical boundaries that previously impinged upon inter-organisational collaborations have become decreasingly important. Of more importance for such collaborations is the nature of security and trust. This is especially so in open collaborative environments like the Grid, where resources can be made available and subsequently accessed and used by remote users from a multitude of institutions, with a variety of different privileges spanning the collaboration. In this context, the ability to dynamically negotiate and subsequently enforce security policies driven by various levels of inter-organisational trust is essential. Numerous access control solutions exist today to address aspects of inter-organisational security. These include centralised access control lists, where all collaborating partners negotiate and agree on the privileges required to access shared resources. Other solutions delegate aspects of access right management to trusted remote individuals, who assign privileges to their (remote) users. These solutions typically entail negotiations and delegations which are constrained by organisations, people and the static rules they impose. Such constraints often result in a lack of flexibility in what has been agreed, difficulties in reaching agreement or, once established, in subsequently maintaining these agreements. Furthermore, these solutions often reduce the autonomy of collaborating organisations because of the need to satisfy collaborating partners' demands, which can result in increased security risks or reduced granularity of security policies. Underpinning this is the issue of trust: specifically, trust realisation between organisations, between individuals, and between entities or systems that are present in multi-domain authorities. Trust negotiation is one approach that supports trust realisation. This thesis introduces a novel model called dynamic trust negotiation (DTN) that supports n-tier negotiation hops for trust realisation in multi-domain collaborative environments, with a specific focus on e-Health. DTN describes how trust pathways can be discovered and how remote security credentials can be mapped to local security credentials through trust contracts, thereby bridging the gap that makes decentralised security policies difficult to define and enforce. Furthermore, DTN shows how n-tier negotiation hops can limit the disclosure of access control policies and how the semantic issues that arise with security attributes in decentralised environments can be reduced. The thesis presents results from the application of DTN to various clinical trials and from the implementation of DTN in the Virtual Organisation for Trials of Epidemiological Studies (VOTES). The thesis concludes that DTN can address the issue of realising and establishing trust between systems or agents within e-Health domains such as clinical trials.
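
The notions of trust pathways and credential mapping can be illustrated with a small, hypothetical Python sketch (invented domains and mappings, not the DTN implementation): trust contracts between domains form a directed graph, a pathway is found by breadth-first search across n negotiation hops, and a remote credential is mapped to a local one by composing the per-contract mappings along the path.

```python
from collections import deque

# Hypothetical model: a trust contract maps one domain's credential to
# another's. Contracts between domains form a directed graph.
contracts = {
    ("hospital_A", "trials_portal"): {"clinician": "trial_investigator"},
    ("trials_portal", "stats_centre"): {"trial_investigator": "data_reader"},
}

def trust_pathway(source, target):
    """Breadth-first search for a chain of trust contracts (n-tier hops)."""
    frontier = deque([[source]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == target:
            return path
        for (src, dst) in contracts:
            if src == path[-1] and dst not in path:
                frontier.append(path + [dst])
    return None

def map_credential(path, credential):
    """Compose per-contract mappings along the pathway; None if a hop fails."""
    for src, dst in zip(path, path[1:]):
        credential = contracts[(src, dst)].get(credential)
        if credential is None:
            return None
    return credential

path = trust_pathway("hospital_A", "stats_centre")
print(map_credential(path, "clinician"))  # -> "data_reader"
```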
193

The dynamic counter-based broadcast for mobile ad hoc networks

Al-Humoud, Sarah Omar January 2011
Broadcasting is a fundamental operation in mobile ad hoc networks (MANETs), crucial to the successful deployment of MANETs in practice. Simple flooding is the most basic broadcasting technique, in which each node rebroadcasts any received packet exactly once. Although flooding is attractive for its simplicity and high reachability, it has a critical disadvantage: it tends to generate excessive collisions and consumes the medium with unneeded, redundant packets. A number of broadcasting schemes have been proposed for MANETs to alleviate the drawbacks of flooding while maintaining a reasonable level of reachability. These schemes fall mainly into two categories: stochastic and deterministic. While the former employ a simple yet effective probabilistic principle to reduce redundant rebroadcasts, the latter typically require sophisticated control mechanisms to reduce excessive broadcast. The key danger with schemes that aim to reduce redundant retransmissions is that they often do so at the expense of reachability, a threshold level of which may be required in many applications. Among the proposed stochastic schemes is counter-based broadcasting, in which redundant broadcasts are inhibited by criteria related to the number of duplicate packets received. For this scheme to achieve optimal reachability, it requires fairly stable and known nodal distributions; in general, however, a MANET's topology changes continuously and unpredictably over time. Though the counter-based scheme was among the earliest suggestions for reducing the problems associated with broadcasting, there have been few attempts to analyse its performance in MANETs in depth. Accordingly, the first part of this research, Chapter 3, sets a baseline study of the counter-based scheme, analysing it under various network operating conditions. The second part, Chapter 4, establishes the claim that augmenting the existing stochastic counter-based scheme by dynamically setting threshold values according to local neighbourhood density improves overall network efficiency. This is done through the implementation and analysis of the Dynamic Counter-Based (DCB) scheme, developed as part of this work. The study shows a clear benefit of the proposed scheme in terms of average collision rate, saved rebroadcasts and end-to-end delay, while maintaining reachability. The third part of this research, Chapter 5, evaluates dynamic counting and tests its performance in approximately realistic scenarios drawn from the rapidly developing field of Vehicular Ad hoc Networks (VANETs). The schemes are studied under metropolitan settings involving nodes moving in streets and lanes with speed and direction constraints. Two models are considered and implemented: the first assumes an unobstructed open terrain; the other takes account of buildings and obstacles. While broadcasting is a vital operation in most MANET routing protocols, investigation of stochastic broadcast schemes for MANETs has tended to focus on the schemes themselves, with little examination of their impact in specific applications, such as route discovery in routing protocols. The fourth part of this research, Chapter 6, evaluates the performance of the Ad hoc On-demand Distance Vector (AODV) routing protocol with a route discovery mechanism based on dynamic counting. AODV was chosen because it is widely accepted by the research community and is standardised by the IETF MANET working group.
That said, other routing protocols would be expected to interact in a similar manner. The performance of AODV is analysed under three broadcasting mechanisms, namely AODV with flooding, AODV with counting and AODV with dynamic counting. Results establish that a noticeable advantage in most considered metrics can be achieved using dynamic counting with AODV compared to simple counting or traditional flooding. In summary, this research analyses the Dynamic Counter-Based scheme under a range of network operating conditions and applications, and demonstrates a clear benefit of the scheme over its predecessors under a wide range of considered conditions.
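
A minimal sketch of the counter-based principle and its dynamic variant may help here (assumed behaviour in Python, not the thesis code; the density-to-threshold mapping is invented): on first receipt of a packet a node starts a random assessment delay (RAD), counts the duplicates it overhears, and rebroadcasts only if the count stays below a threshold, which DCB derives from local neighbour density rather than fixing as a constant.

```python
import random

def dynamic_threshold(num_neighbours):
    # Hypothetical density-to-threshold mapping: sparse neighbourhoods
    # rebroadcast more eagerly, dense ones suppress redundant copies.
    if num_neighbours < 5:
        return 4   # sparse: favour reachability
    elif num_neighbours < 15:
        return 3   # medium density
    return 2       # dense: favour fewer redundant rebroadcasts

class CounterBasedNode:
    def __init__(self, node_id, num_neighbours):
        self.node_id = node_id
        self.num_neighbours = num_neighbours
        self.counters = {}  # packet id -> duplicates heard during RAD

    def on_receive(self, packet_id):
        if packet_id not in self.counters:
            self.counters[packet_id] = 1
            # start the random assessment delay before deciding
            self.rad = random.uniform(0.0, 0.01)
        else:
            self.counters[packet_id] += 1

    def on_rad_expiry(self, packet_id):
        threshold = dynamic_threshold(self.num_neighbours)
        if self.counters[packet_id] < threshold:
            return "rebroadcast"  # too few copies heard: retransmit
        return "suppress"         # enough neighbours already covered it
```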
194

Studies and a model of appropriation of information and communication technologies in university students’ everyday life

Rojas, Jose January 2011
This thesis investigated the appropriation of information and communication technologies (ICTs) in everyday life among university students and mature people. To that end, the pertinent literature was reviewed, resulting in the identification of three issues in need of more careful appraisal by the HCI field. These issues were used as the research questions propelling this work: the identification of elements favouring the process of appropriation; the effect of a changing context on this process; and the co-existence of seemingly overlapping ICTs in people's lives. A qualitative methodology was used in the studies reported in this thesis. Ethnographic work was conducted over a period of three months with fifteen masters students at the University of Glasgow in the UK. Further ethnographic work, over a shorter time frame, was conducted abroad among university students at Hokkaido University in Japan, Ajou University in South Korea and Nankai University in China. Additional ethnographic work was conducted among mature people in a religious community in Mexico. The qualitative data gathered were analysed using Grounded Theory and Structuration Theory. This work makes two main contributions. The first is a set of insights providing answers to the research questions posed in the thesis. These answers were advanced as a complement to, and expansion of, issues previously identified in the literature as relevant to the process of appropriation. Because of the ecological perspective underlying the thesis, the answers were presented as technology-neutral, yet useful for understanding how the appropriation of technology is induced and sustained, what the impact of a changing environment on the process of appropriation is, and how similar technologies with overlapping features can thrive in the same environment. The second contribution is a three-layered model of the appropriation of ICTs, built from the identification of common patterns across the studies conducted. This model details the role of several intersecting large-scale social processes or structures (i.e., governments, various-sized private and state-owned organisations, the media, families and peers, as well as marketing practices, technical infrastructures and architectural spaces) that provide the resources and restrictions upon which the process of appropriation of digital technology rests. The framework was advanced as a simple tool to aid HCI researchers in the collection, analysis and reporting of qualitative data around the process of appropriation as shaped by the pervasive social structures of contemporary society. The limitations of the ethnographic work reported here, and of the ensuing conclusions, are identified and used to suggest avenues of future exploration around the appropriation of ICTs in daily life.
195

A ranking framework and evaluation for diversity-based retrieval

Leelanupab, Teerapong January 2012
There has been growing momentum in building information retrieval (IR) systems that consider both the relevance and the diversity of retrieved information, which together improve the usefulness of search results as perceived by users. Some users may genuinely require a set of multiple results to satisfy their information need, as there is no single result that completely fulfils it. Others may be uncertain about their information need and may submit ambiguous or broad (faceted) queries, either intentionally or unintentionally. A sensible approach to these problems is to diversify search results so as to address all possible senses underlying a query or all possible answers satisfying the information need. In this thesis, we explore three aspects of diversity-based document retrieval: 1) recommender systems, 2) retrieval algorithms, and 3) evaluation measures. The first goal of this thesis is to provide an understanding of the need for diversity in search results from the users' perspective. We develop an interactive recommender system for the purpose of a user study. Designed to facilitate users engaged in exploratory search, the system features content-based browsing, aspectual interfaces, and diverse recommendations. While the diverse recommendations allow users to discover more and different aspects of a search topic, the aspectual interfaces allow users to manage and structure their own search process and results according to the aspects found during browsing. The recommendation feature mines implicit relevance feedback extracted from a user's browsing trails and diversifies recommended results with respect to document content. The result of our user-centred experiment shows that result diversity is needed in realistic retrieval scenarios. Next, we propose a new ranking framework for promoting diversity in a ranked list. We combine two distinct result diversification patterns, which leads to a general framework enabling the development of a variety of ranking algorithms for diversifying documents. To validate our proposal and to gain more insight into approaches for diversifying documents, we empirically compare our integration framework against a common ranking approach (i.e. the probability ranking principle) as well as several diversity-based ranking strategies. These include maximal marginal relevance, modern portfolio theory, and sub-topic-aware diversification based on sub-topic modelling techniques, e.g. clustering, latent Dirichlet allocation, and probabilistic latent semantic analysis. Our findings show that the two diversification patterns can be employed together to improve the effectiveness of ranking diversification, and that the effectiveness of our framework depends mainly on the effectiveness of the underlying sub-topic modelling techniques. Finally, we examine evaluation measures for diversity retrieval. We analytically identify an issue affecting the de facto standard measure, novelty-biased discounted cumulative gain (α-nDCG). This issue prevents the measure from behaving as desired, i.e. assessing the effectiveness of systems that provide complete coverage of sub-topics while avoiding excessive redundancy. We show that this issue matters because it strongly affects the evaluation of retrieval systems, specifically by overrating top-ranked systems that repeatedly retrieve redundant information. To overcome the issue, we derive a theoretically sound solution by defining a safe threshold for the measure's parameter on a per-query basis.
We then examine the impact of arbitrary settings of the α-nDCG parameter, and evaluate the intuitiveness and reliability of α-nDCG when using our proposed setting on both real and synthetic rankings. We demonstrate that the diversity of document rankings can be intuitively measured by employing the safe threshold. Moreover, our proposal does not harm, but instead increases, the reliability of the measure in terms of discriminative power, stability, and sensitivity.
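
For reference, a short Python sketch of the standard novelty-biased gain underlying α-nDCG (following the usual formulation; the rankings are made up): a document's gain for a sub-topic is discounted by a factor of (1 − α) for every higher-ranked document already covering that sub-topic, which is the redundancy discount whose parameterisation the thesis analyses.

```python
import math

def alpha_dcg(ranking, alpha=0.5):
    """ranking: list of sets, each the sub-topics covered by one document."""
    covered = {}      # sub-topic -> how often seen at earlier ranks
    score = 0.0
    for k, subtopics in enumerate(ranking):
        # novelty-biased gain: repeats of a sub-topic are geometrically damped
        gain = sum((1 - alpha) ** covered.get(t, 0) for t in subtopics)
        score += gain / math.log2(k + 2)   # rank discount, ranks are 1-based
        for t in subtopics:
            covered[t] = covered.get(t, 0) + 1
    return score

# A redundant ranking scores lower than one covering new sub-topics:
print(alpha_dcg([{"a"}, {"a"}, {"a"}]))   # repeats sub-topic "a": ~1.44
print(alpha_dcg([{"a"}, {"b"}, {"c"}]))   # three distinct sub-topics: ~2.13
```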
196

Algorithmic aspects of stable matching problems

O'Malley, Gregg January 2007
The Stable Marriage problem (SM), the Hospitals/Residents problem (HR) and the Stable Roommates problem (SR) are three classical stable matching problems, first studied by Gale and Shapley in 1962. These problems have widespread practical application in centralised automated matching schemes, which assign applicants to posts based on preference lists and capacity constraints, both in the UK and internationally. Within such schemes it is often the case that an agent's preference list may be incomplete, and agents may also be allowed to express indifference in the form of ties. In the presence of ties, three stability criteria can be defined, namely weak stability, strong stability and super-stability. In this thesis we consider stable matching problems from an algorithmic point of view. Some of the problems we consider are derived from new stable matching models, whilst others are obtained from existing models involving ties and incomplete lists, with additional natural restrictions on the problem instance. We also explore the use of constraint programming with both SM and HR. We first study a new variant of the Student-Project Allocation problem in which each student ranks a set of acceptable projects in order of preference, and each lecturer similarly ranks his available projects. In this context, two stability definitions can be identified, namely weak stability and strong stability. We show that the problem of finding a maximum weakly stable matching is NP-hard, but describe two 2-approximation algorithms for this problem. Regarding strong stability, we describe a polynomial-time algorithm for finding such a matching or reporting that none exists. Next we investigate SM with ties and incomplete lists (SMTI), and HR with ties (HRT), where the length of each agent's list is subject to an upper bound. We present both polynomial-time algorithms and NP-hardness results for a range of problems derived from imposing upper bounds on the list lengths on one or both sides. We also consider HRT, and SR with ties and incomplete lists (SRTI), where the preference lists of one or both sets of agents (as applicable) are derived from one or two master lists in which agents are ranked. For super-stability, in the case of each of HRT and SRTI with a master list, we describe a linear-time algorithm that simplifies the algorithm used in the general case. For strong stability, in each of these settings, we describe an algorithm that is faster than that for the general case. We also show that, given an instance I of SRTI with a master list, the problem of finding a weakly stable matching is polynomial-time solvable, whereas the problem of finding a maximum weakly stable matching for such an I is NP-hard. Other new stable matching models that we study are the variants of SMTI and SRTI with symmetric preferences. Here we consider two models derived from alternative ways of interpreting the rank of an agent in the presence of ties. For both models we show that deciding whether a complete weakly stable matching exists is NP-complete. For one of the models we then show that each of the problems of finding a minimum-regret weakly stable matching and an egalitarian weakly stable matching is NP-hard, and that the problem of determining whether a (man, woman) pair belongs to some weakly stable matching is NP-complete.
We then describe algorithms for each of the problems of finding a super-stable matching and a strongly stable matching, or reporting that none exists, given instances of SRTI and HRT with symmetric preferences (regardless of how the ranks are interpreted). Finally, we use constraint programming techniques to model instances of SM and HR. We describe two encodings of SM as a constraint satisfaction problem. The first model for SM is then extended to the case of HR, and this encoding for HR is in turn extended to create a model for HRT under weak stability. Using this encoding we can obtain, with the aid of search, all the weakly stable matchings for a given instance of HRT.
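
For background, here is a compact Python sketch of the Gale-Shapley deferred-acceptance algorithm for classical SM, the 1962 algorithm referred to above (the instance data is illustrative); the thesis's variants with ties, incomplete lists and master lists build on this basic procedure.

```python
def gale_shapley(men_prefs, women_prefs):
    """Man-oriented deferred acceptance; returns a stable matching."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)              # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}
    fiance = {}                         # woman -> current partner
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]   # m's best unproposed woman
        next_choice[m] += 1
        if w not in fiance:
            fiance[w] = m                  # w was free: engage
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])         # w prefers m: old partner freed
            fiance[w] = m
        else:
            free.append(m)                 # w rejects m; he proposes again
    return {m: w for w, m in fiance.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # {'m2': 'w1', 'm1': 'w2'}
```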
197

A new parallelisation technique for heterogeneous CPUs

Gdura, Youssef Omran January 2012
Parallelisation has moved in recent years into mainstream compilers, and the demand for parallelising tools that can do a better job of automatic parallelisation is higher than ever. During the last decade considerable attention has been focused on developing programming tools that support both explicit and implicit parallelism to keep up with the power of the new multi-core technology. Yet success in developing automatic parallelising compilers has been limited, mainly due to the complexity of the analysis required to exploit available parallelism and manage other parallelisation measures such as data partitioning, alignment and synchronisation. This dissertation investigates the development of a programming tool that automatically parallelises operations on large data structures on a heterogeneous architecture, and asks whether a high-level programming language compiler can use this tool to exploit implicit parallelism and realise the performance potential of modern multicore technology. The work involved the development of a fully automatic parallelisation tool, called VSM, that completely hides the underlying details of general-purpose heterogeneous architectures. The VSM implementation provides direct and simple access for users to parallelise array operations on the Cell's accelerators without the need for any annotations or process directives. The work also involved extending the Glasgow Vector Pascal compiler to work with the VSM implementation as a single compiler system. The resulting compiler system, called VP-Cell, takes a single source file and parallelises array expressions automatically. Several experiments were conducted using Vector Pascal benchmarks to demonstrate the validity of the VSM approach. The VP-Cell system achieved significant runtime performance gains on a single accelerator compared with the master processor, and near-linear speedups when code runs across the Cell's accelerators. Though VSM was designed mainly for building parallelising compilers, it also showed considerable performance when running C code over the Cell's accelerators.
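
The idea of transparently parallelising array expressions can be sketched in Python (a loose, hypothetical analogy, not the actual VSM or Vector Pascal code): an elementwise array operation is partitioned into chunks and dispatched to worker processes, with partitioning, dispatch and reassembly hidden from the caller, much as VSM hides the Cell's accelerators behind ordinary array operations.

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):                 # the elementwise operation to parallelise
    return x * x

def apply_chunk(chunk):
    return [square(x) for x in chunk]

def parallel_map(data, workers=4):
    """Apply the operation elementwise, partitioning `data` across workers."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(apply_chunk, chunks)   # dispatch to workers
    return [x for chunk in results for x in chunk]  # reassemble in order

if __name__ == "__main__":
    # The caller sees an ordinary array operation; partitioning and
    # synchronisation are hidden inside parallel_map.
    print(parallel_map(list(range(10))))
```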
198

Parameterised verification of randomised distributed systems using state-based models

Graham, Douglas January 2008
Model checking is a powerful technique for the verification of distributed systems, but is limited to verifying systems with a fixed number of processes. The verification of a system for an arbitrary number of processes is known as the parameterised model checking problem and is, in general, undecidable. Parameterised model checking has been studied in depth for non-probabilistic distributed systems. We extend some of this work in order to tackle the parameterised model checking problem for distributed protocols that exhibit probabilistic behaviour, a problem that has not been widely addressed to date. In particular, we consider the application of network invariants and explicit induction to the parameterised verification of state-based models of randomised distributed systems. We demonstrate the use of network invariants by constructing invariant models for non-probabilistic and probabilistic forms of a simple counter token ring protocol, and show that proving properties of the invariants equates to proving properties of the token ring protocol for any number of processes. The use of induction is considered for the verification of a class of randomised distributed systems. These systems, termed degenerative, have the property that a model of a system with a given communication graph eventually behaves like a model of the system with a reduced graph, where reduction is by removal of a set of nodes. We distinguish between deterministically, probabilistically and semi-degenerative systems, according to the manner in which a system degenerates. For the former two classes we describe induction schemas for reasoning about models of these systems over arbitrary communication graphs, and show that certain properties hold for models of such systems with any graph if they hold for all models of the system with some base graph. We demonstrate this via two case studies, both randomised leader election protocols. We also illustrate how induction can be employed to prove properties of semi-degenerative systems by considering a simple gossip protocol.
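
As a toy illustration of the fixed-size models that parameterised reasoning must generalise over (an invented example, not the thesis's counter token ring protocol), the Python sketch below enumerates the state space of an n-process token ring and checks a mutual-exclusion-style safety property for one fixed n at a time; the network-invariant and induction techniques described above are what lift such a check to arbitrary n.

```python
def reachable_states(n):
    """Breadth-first exploration of an n-process token ring's state space.

    A state records which process holds the token; the only transition
    passes the token to the next process around the ring.
    """
    initial = tuple(i == 0 for i in range(n))   # process 0 starts with it
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        holder = state.index(True)
        succ = list(state)
        succ[holder] = False
        succ[(holder + 1) % n] = True           # pass token clockwise
        succ = tuple(succ)
        if succ not in seen:
            seen.add(succ)
            frontier.append(succ)
    return seen

def check_mutual_exclusion(n):
    """The safety property: exactly one token in every reachable state."""
    return all(sum(s) == 1 for s in reachable_states(n))

# Model checking verifies one fixed n at a time; the parameterised
# problem is establishing this for *all* n, e.g. via a network invariant.
print(all(check_mutual_exclusion(n) for n in range(2, 8)))  # True
```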
199

A coarse-grained dynamically reconfigurable MAC processor for power-sensitive multi-standard devices

Nabi, Syed Waqar January 2009
DRMP, a Dynamically Reconfigurable MAC Processor, is an innovative, dynamically reconfigurable System-on-Chip architecture. The architecture exploits substantial overlaps in the functionality of different wireless MAC layers, and its flexibility is specialised for the requirements of the MAC layer of wireless standards. It is targeted at consumer, multi-standard, handheld devices, and its design addresses the balance of flexibility and power efficiency that this target market demands. The DRMP reconfigures packet by packet, on the fly, allowing execution of concurrent protocol modes on a single hardware co-processor. An interrupt-driven programming model is also presented and shown to implement the protocol state machines of three protocols on a CPU. These features allow the DRMP to replace three MAC processors in a handheld device. The most innovative component of the DRMP architecture is its Interface and Reconfiguration Controller, which uses a combination of asynchronous controllers to dynamically reconfigure the functional units in the architecture and delegate MAC tasks to them. The architecture has been modelled in Simulink at a cycle-approximate level of abstraction. Results of simulations involving the transmission and reception of packets show that the platform concurrently handles three protocol streams, reconfigures dynamically, and yet meets and exceeds the protocol timing constraints, all at a moderate clock frequency. Its heterogeneous, coarse-grained functional units, the limited connectivity required between them, and the proportionally large time for which these resources are idle promise very modest power consumption, suitable for mobile devices, while offering the flexibility to implement different MAC protocols.
200

Validation and verification of the interconnection of hardware intellectual property blocks for FPGA-based packet processing systems

McKechnie, Paul Edward January 2010
As networks become more versatile, the computational requirements for supporting additional functionality increase. The increasing demands of these networks can be met by Field Programmable Gate Arrays (FPGAs), an increasingly popular technology for implementing packet processing systems. The fine-grained parallelism and density of these devices can be exploited to meet the computational requirements and implement complex systems on a single chip. However, the increasing complexity of FPGA-based systems makes them susceptible to errors and difficult to test and debug. To tackle the complexity of modern designs, system-level languages have been developed to provide abstractions suited to the domain of the target system. Unfortunately, the lack of formality in these languages can give rise to errors that are not caught until late in the design cycle. This thesis presents three techniques for verifying and validating FPGA-based packet processing systems described in a system-level description language. First, a type system is applied to the system description language to detect errors before implementation. Second, system-level transaction monitoring is used to observe high-level events on-chip after implementation. Third, the high-level information embodied in the system description language is exploited to instrument the system automatically for on-chip monitoring. The thesis demonstrates that these techniques catch errors which are missed by traditional verification and validation tools, that the locations of faults can be pinpointed, and that errors are caught earlier in the design flow, saving time by reducing synthesis iterations.
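
The first technique, applying a type system to catch interconnection errors before implementation, can be illustrated with a hypothetical Python sketch (invented port types, not the thesis's description language): each IP block declares typed ports, and a checker rejects connections whose protocol, direction or data width disagree.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    protocol: str   # e.g. an assumed streaming interface name
    width: int      # data width in bits
    direction: str  # "out" (producer) or "in" (consumer)

def check_connection(src: Port, dst: Port):
    """Reject ill-typed interconnections between IP blocks."""
    errors = []
    if src.direction != "out" or dst.direction != "in":
        errors.append(f"{src.name} -> {dst.name}: direction mismatch")
    if src.protocol != dst.protocol:
        errors.append(f"{src.name} -> {dst.name}: protocol "
                      f"{src.protocol} vs {dst.protocol}")
    if src.width != dst.width:
        errors.append(f"{src.name} -> {dst.name}: width "
                      f"{src.width} vs {dst.width}")
    return errors

parser_out = Port("parser.m_axis", "axi-stream", 64, "out")
filter_in = Port("filter.s_axis", "axi-stream", 32, "in")
for e in check_connection(parser_out, filter_in):
    print("type error:", e)   # caught before synthesis, not on-chip
```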
