271

Physical layer security in wireless networks : design and enhancement

Wang, Lifeng January 2015 (has links)
Security and privacy have become increasingly significant concerns in wireless communication networks, due to the open nature of the wireless medium, which leaves wireless transmissions vulnerable to eavesdropping and malicious attacks. The emergence and development of decentralized and ad-hoc wireless networks pose great challenges to the practical implementation of higher-layer key distribution and management. Against this background, physical layer security has emerged as an attractive approach for performing secure transmission in a low-complexity manner. This thesis concentrates on physical layer security design and enhancement in wireless networks. First, this thesis presents a new unifying framework to analyze the average secrecy capacity and secrecy outage probability. Besides the exact average secrecy capacity and secrecy outage probability, a new approach for analyzing the asymptotic behavior is proposed to compute key performance parameters such as the high signal-to-noise ratio slope, power offset, secrecy diversity order, and secrecy array gain. Typical fading environments such as two-wave with diffuse power and Nakagami-m are taken into account. Second, an analytical framework for using antenna selection schemes to achieve secrecy is provided. In particular, transmit antenna selection and generalized selection combining are considered, including the special cases of selection combining and maximal-ratio combining. Third, the fundamental questions surrounding the joint impact of power constraints on the cognitive wiretap channel are addressed. Important design insights are revealed regarding the interplay between two power constraints, namely the maximum transmit power at the secondary network and the peak interference power at the primary network. Fourth, secure single-carrier transmission is considered in two-hop decode-and-forward relay networks. A two-stage relay and destination selection scheme is proposed to minimize eavesdropping and maximize the signal power of the link between the relay and the destination. In two-hop amplify-and-forward untrusted relay networks, secrecy may not be guaranteed even in the absence of external eavesdroppers. As such, cooperative jamming with optimal power allocation is proposed to achieve a non-zero secrecy rate. Fifth and last, physical layer security in large-scale wireless sensor networks is introduced. A stochastic geometry approach is adopted to model the positions of sensors, access points, sinks, and eavesdroppers. Two scenarios are considered: i) the active sensors transmit their sensing data to the access points, and ii) the active access points forward the data to the sinks. Important insights are drawn.
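As a hedged illustration of the secrecy metrics analysed above, the following minimal Python sketch estimates the secrecy outage probability by Monte Carlo simulation under Rayleigh fading. The average SNRs and target secrecy rate are assumed values, and plain Rayleigh fading is a simpler stand-in for the two-wave with diffuse power and Nakagami-m environments treated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
snr_m, snr_e = 10.0, 3.0        # average main/eavesdropper SNRs (assumed)
rate_s = 1.0                    # target secrecy rate in bits/s/Hz (assumed)
n = 1_000_000

# Rayleigh fading: instantaneous SNR is exponentially distributed
g_m = snr_m * rng.exponential(size=n)
g_e = snr_e * rng.exponential(size=n)

# instantaneous secrecy capacity: max(0, log2(1+g_m) - log2(1+g_e))
c_s = np.maximum(0.0, np.log2(1 + g_m) - np.log2(1 + g_e))

# secrecy outage: the secrecy capacity falls below the target rate
print("secrecy outage probability ~", np.mean(c_s < rate_s))
```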
272

Applying Bayesian networks to model uncertainty in project scheduling

Khodakarami, Vahid January 2009 (has links)
Risk Management has become an important part of Project Management. In spite of numerous advances in the field of Project Risk Management (PRM), handling uncertainty in complex projects still remains a challenge. An important component of PRM is risk analysis, which attempts to measure risk and its impact on different project parameters such as time, cost and quality. By highlighting the trade-off between project parameters, the thesis concentrates on project time management under uncertainty. The earliest research incorporating uncertainty and risk in projects started in the late 1950s. Since then, several techniques and tools have been introduced, and many of them are widely used and applied throughout different industries. However, they often fail to capture uncertainty properly and produce inaccurate, inconsistent and unreliable results. This is evident from persistent problems of cost and schedule overrun. The thesis will argue that simulation-based techniques, the dominant and state-of-the-art approach for modelling uncertainty in projects, suffer from serious shortcomings, and that more advanced techniques are required. Bayesian Networks (BNs) are a powerful technique for decision support under uncertainty that has attracted a lot of attention in different fields. However, applying BNs in project risk management is novel. The thesis aims to show that BN modelling can improve project risk assessment. A literature review explores the important limitations of the current practice of project scheduling under uncertainty. A new model is proposed which applies BNs to perform the well-known Critical Path Method (CPM) calculation. The model subsumes the benefits of CPM while adding the BN capability to properly capture different aspects of uncertainty in project scheduling.
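To make the CPM calculation concrete, here is a toy forward-pass sketch with uncertain task durations. The thesis performs this inference with Bayesian Networks; plain Monte Carlo sampling stands in here purely to illustrate the calculation being made probabilistic, and the tasks, precedences and duration distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}      # precedences
dur = {"A": (5, 1), "B": (3, 0.5), "C": (6, 2), "D": (2, 0.2)}  # (mean, sd)

finish_samples = []
for _ in range(10_000):
    d = {t: max(0.0, rng.normal(m, s)) for t, (m, s) in dur.items()}
    early_finish = {}
    for t in ["A", "B", "C", "D"]:          # tasks in topological order
        start = max((early_finish[p] for p in preds[t]), default=0.0)
        early_finish[t] = start + d[t]      # CPM forward pass
    finish_samples.append(early_finish["D"])

finish = np.array(finish_samples)
print("P(project finishes within 12) ~", np.mean(finish <= 12))
```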
273

Network-provider-independent overlays for resilience and quality of service

Zhang, Xian January 2011 (has links)
Overlay networks are viewed as one of the solutions to the inefficiency and slow evolution of the Internet and have been the subject of significant research. Most existing overlays providing resilience and/or Quality of Service (QoS) need cooperation among different network providers, but this raises an inter-provider trust issue that cannot be easily solved. In this thesis, we focus mainly on network-provider-independent overlays and investigate their performance in providing two different types of service. Specifically, this thesis addresses the following problems: Provider-independent overlay architecture: a provider-independent overlay framework named Resilient Overlay for Mission-Critical Applications (ROMCA) is proposed. We elaborate on its structure, including component composition and functions, and provide several operational examples. Overlay topology construction for providing a resilience service: we investigate the topology design problem of provider-independent overlays that aim to provide a resilience service. More specifically, based on the ROMCA framework, we formulate this problem mathematically and prove its NP-hardness. Three heuristics are proposed and extensive simulations are carried out to verify their effectiveness. Application mapping with resilience and QoS guarantees: taking application mapping as the targeted service for ROMCA, we formulate this problem as an Integer Linear Program (ILP). Moreover, a simple but effective heuristic is proposed to address this issue in a time-efficient manner. Simulations with both synthetic and real networks demonstrate the superiority of both solutions over existing ones. Substrate topology information availability and the impact of its accuracy on overlay performance: based on our survey, which summarizes the methodologies available for inferring the selective substrate topology formed among a group of nodes through active probing, we find that such information is usually inaccurate and that additional mechanisms are needed to obtain a more accurate inferred topology. We therefore examine the impact of inferred substrate topology accuracy on overlay performance, given only inferred substrate topology information.
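As a rough illustration of the kind of heuristic involved, the sketch below greedily adds the candidate overlay link that most increases the number of edge-disjoint paths between a mission-critical source and destination. The toy substrate, candidate set and budget are invented; this shows the flavour of such heuristics rather than the three algorithms developed in the thesis.

```python
import itertools
import networkx as nx

# toy substrate graph; "s" and "t" are the mission-critical endpoints
g = nx.Graph([("s", "a"), ("a", "t"), ("s", "b"), ("b", "c")])
candidates = [e for e in itertools.combinations(g.nodes, 2)
              if not g.has_edge(*e)]
budget = 2                                  # overlay links we may add

def disjoint(graph):
    # resilience metric: number of edge-disjoint s-t paths
    return len(list(nx.edge_disjoint_paths(graph, "s", "t")))

for _ in range(budget):
    # greedily pick the candidate link giving the best resilience gain
    best = max(candidates,
               key=lambda e: disjoint(nx.Graph(list(g.edges) + [e])))
    g.add_edge(*best)
    candidates.remove(best)

print("edge-disjoint s-t paths after overlay:", disjoint(g))
```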
274

Integration of TV white space and femtocell networks

Peng, Fei January 2013 (has links)
Femtocells are an effective approach to increasing system capacity in cellular networks. Since traditional Femtocells use the same frequency band as the cellular network, cross-tier and co-tier interference exist in such Femtocell networks and significantly degrade system throughput. In order to tackle these challenges, interference mitigation has drawn attention from both academia and industry. TV White Space (TVWS) is a newly opened portion of spectrum, arising from the spare spectrum created by the transition from analogue TV to digital TV. It can be utilized by using cognitive radio technology, in accordance with the policies of telecommunications regulators. This thesis considers using locally available TVWS to reduce the interference in Femtocell networks. The objective of this research is to mitigate the downlink cross-tier and co-tier interference in different Femtocell deployment scenarios, and to increase the throughput of the overall system. A geo-location database model to obtain locally available TVWS information in the UK is developed in this research. The database is designed using a power control method to calculate the available TVWS channels and the maximum allowable transmit power, based on digital TV transmitter information in the UK and the regulations on unlicensed use of TVWS. The proposed database model is first combined with a grid-based resource allocation scheme and investigated in a simplified Femtocell network to demonstrate the gains of using TVWS in Femtocell networks. Furthermore, two Femtocell deployment scenarios are studied in this research. In the suburban Femtocell deployment scenario, a novel system architecture that consists of the geo-location database and a resource allocation scheme using TVWS is proposed to mitigate cross-tier interference between the Macrocell and Femtocells. In the dense Femtocell deployment scenario, a power-efficient resource allocation scheme is proposed to maximize the throughput of Femtocells while limiting the co-tier interference among Femtocells. The optimization problem in the power-efficient scheme is solved using a sequential quadratic programming method. The simulation results show that the proposed schemes can effectively mitigate the interference in Femtocell networks in practical deployment scenarios.
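A heavily simplified sketch of the geo-location database idea follows: given a device's distance from a protected DTV contour, cap its transmit power so that the interference received at the contour stays under a threshold. The path-loss exponent, protection threshold and power cap here are illustrative assumptions, not Ofcom's actual rules or the thesis's database model.

```python
import math

def max_tx_power_dbm(dist_km, protect_dbm=-84.0, cap_dbm=20.0,
                     ple=3.5, freq_mhz=600.0):
    # log-distance path loss between the device and the protected contour
    pl_db = (32.45 + 10 * ple * math.log10(max(dist_km, 0.01))
             + 20 * math.log10(freq_mhz))
    # transmit power that just meets the protection threshold, capped
    return min(protect_dbm + pl_db, cap_dbm)

for d in (1, 5, 20):
    print(f"{d:>3} km -> allowed power {max_tx_power_dbm(d):.1f} dBm")
```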
275

Traffic pattern prediction in cellular networks

Zhang, Kejing January 2011 (has links)
Increasing numbers of users, together with increased use of high bit-rate services, complicate radio resource management in 3G systems. In order to improve system capacity and guarantee QoS, a large amount of research has been carried out on radio resource management. One viable approach reported is to use semi-smart antennas to dynamically change the radiation pattern of target cells to reduce congestion. One key factor in semi-smart antenna techniques is the algorithm that adjusts the beam pattern to cooperatively control the size and shape of each radio cell. Methods described in the literature determine the optimum radiation patterns according to the currently observed congestion. By using machine learning methods, it is possible to detect an upcoming change in traffic patterns at an early stage and then carry out beamforming optimization to alleviate the reduction in network performance. Inspired by research carried out in the field of vehicle mobility prediction, this work learns the movement patterns of mobile users with three different learning models, by analysing movement patterns captured locally. Three different mobility models are introduced to mimic the real-life movement of mobile users and provide analysable data for learning. The simulation results show that the error rates of predictions of the geographic distribution of mobile users are low, and that it is feasible to use the proposed learning models to predict future traffic patterns. Being able to predict these patterns means that optimized beam patterns can be calculated according to the predicted traffic patterns and loaded into the relevant base stations in advance.
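As a hedged, minimal example of the learning idea, the sketch below treats user movement as a first-order Markov chain over cells: it counts observed cell-to-cell transitions and predicts the most likely next cell. The cell trace is invented, and the three learning models in the thesis are richer than this.

```python
from collections import Counter, defaultdict

# invented trace of the cells a user was observed in
trace = ["c1", "c2", "c3", "c1", "c2", "c3", "c1", "c2", "c1", "c2", "c3"]

transitions = defaultdict(Counter)      # cell -> counts of next cells
for cur, nxt in zip(trace, trace[1:]):
    transitions[cur][nxt] += 1

def predict_next(cell):
    # most frequently observed successor cell, if any
    counts = transitions[cell]
    return counts.most_common(1)[0][0] if counts else None

print("after c2, most likely next cell:", predict_next("c2"))
```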
276

Planning simulation run length in packet queues in communications networks

Xu, Ling January 2013 (has links)
Simulation is a technique of growing importance and is becoming an indispensable tool in various academic and industrial fields, including packet networks. Simulation provides an alternative research approach to implementing a real environment, owing to its scalability, flexibility and ease of setup. However, simulating large-scale networks can be very time- and resource-consuming. It can take several days to run one long simulation experiment, which may be expensive or even unaffordable. Therefore, planning simulations is important. This research proposes to plan simulation run length by predicting the shortest run length required to approximate steady state, in the form of mathematical and logical expressions, i.e. by building an analytical model. Previous related research focused mainly on classical models, such as the M/M/1 and M/G/1 queue models. This research expands the scope to include a packet multiplexing model of homogeneous sources, which is widely accepted and used. This thesis investigates different traffic types (Markovian/Pareto) and different QoS parameters (delay/loss), as well as applying them to end-to-end networks. These scenarios are analysed and expressed in terms of different desired precision levels. Final results show that run length is well predicted using the developed analytical model, which can serve as a guide for simulation planning in packet networks of the present and the future. This can be of great significance for performance evaluation studies.
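To make the planning problem concrete, here is a hedged toy experiment for the classical M/M/1 case: simulate customer waiting times with the Lindley recursion and measure how many customers it takes for the running mean wait to settle within 5% of the analytic steady-state value. The rates, tolerance and sample size are assumptions, and the thesis derives analytical run-length predictions rather than measuring convergence this way.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu = 0.8, 1.0                      # arrival and service rates (assumed)
wq_exact = lam / (mu * (mu - lam))      # analytic M/M/1 mean waiting time

n = 100_000
inter = rng.exponential(1 / lam, n)     # inter-arrival times
serv = rng.exponential(1 / mu, n)       # service times

w = np.empty(n)                         # waiting time of each customer
w[0] = 0.0
for i in range(1, n):                   # Lindley recursion
    w[i] = max(0.0, w[i - 1] + serv[i - 1] - inter[i])

running = np.cumsum(w) / np.arange(1, n + 1)
viol = np.where(np.abs(running / wq_exact - 1) >= 0.05)[0]
settle = int(viol[-1]) + 1 if viol.size else 0
print("customers until the running mean stays within 5%:", settle)
```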
277

A trust framework for peer-to-peer interaction in ad hoc networks

Boodnah, Javesh January 2010 (has links)
As a wider public increasingly adopts mobile devices with diverse applications, the question of whom to trust while on the move becomes a crucial one. The need to find dependable partners to interact with is further exacerbated in situations where one is out of range of backbone structures such as wireless base stations or cellular networks. One solution is to generate self-started networks, a variant of which is the ad hoc network that promotes peer-to-peer networking. The work in this thesis is aimed at defining a framework for such an ad hoc network that provides ways for participants to distinguish and collaborate with their most trustworthy neighbours. In this framework, entities generate trust information by directly observing the behaviour of their peers. Such trust information is also shared, in order to assist entities in situations where they have had no prior interactions with their target peers. The key novelty of the framework is to aggregate the trust evaluation process around the most trustworthy nodes, thereby creating a hierarchy of nodes distinguished by the class, defined by cluster heads, to which they belong. Furthermore, the additional overheads that such a framework generates for the network are minimised through the use of clusters. By design, the framework also houses a rule-based mechanism to thwart misbehaviour or non-cooperation. Key performance indicators are also defined within this work that allow a framework to be quickly analysed through snapshot data, a concept analogous to those used within financial circles when assessing companies. This is also a novel point that may provide the basis for directly comparing models with different underlying technologies. The end result is a trust framework that fully meets the basic requirements for a sustainable model of trust that can be deployed on an ad hoc network and that provides enhancements in efficiency (using clustering) and trust performance.
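The sketch below illustrates, under invented weights and update rules, how direct observations and recommendations shared by cluster neighbours might be combined into a single trust score; it is a minimal stand-in for the thesis's trust model, not a reproduction of it.

```python
def direct_trust(successes, failures):
    # Beta-reputation style estimate from directly observed behaviour
    return (successes + 1) / (successes + failures + 2)

def combined_trust(successes, failures, recommendations, w_direct=0.7):
    # blend own observations with neighbours' shared opinions
    own = direct_trust(successes, failures)
    if not recommendations:
        return own
    shared = sum(recommendations) / len(recommendations)
    return w_direct * own + (1 - w_direct) * shared

# a peer with 8 good and 2 bad interactions, plus two neighbour opinions
print(round(combined_trust(8, 2, [0.9, 0.4]), 3))
```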
278

Subspace discovery for video anomaly detection

Tziakos, Ioannis January 2010 (has links)
In automated video surveillance, anomaly detection is a challenging task. We address this task as a novelty detection problem, where pattern description is limited and labelling information is available only for a small sample of normal instances. Classification under these conditions is prone to over-fitting. The contribution of this work is a novel video abnormality detection method that does not need object detection and tracking. The method is based on subspace learning, discovering a subspace in which abnormality detection is easier to perform, without the need for detailed annotation and description of these patterns. The problem is formulated as one-class classification utilising a low-dimensional subspace, where a novelty classifier is used to learn normal actions automatically and then to detect abnormal actions from low-level features extracted from a region of interest. The subspace is discovered (using both labelled and unlabelled data) by a locality-preserving graph-based algorithm that utilises the Graph Laplacian of a specially designed parameter-less nearest-neighbour graph. The methodology compares favourably with alternative subspace learning algorithms (both linear and non-linear) and with direct one-class classification schemes commonly used for off-line abnormality detection, on synthetic and real data. Based on these findings, the framework is extended to on-line abnormality detection in video sequences, utilising multiple independent detectors deployed over the image frame to learn the local normal patterns and infer abnormality for the complete scene. The method is compared with an alternative linear method to establish advantages and limitations in on-line abnormality detection scenarios. Analysis shows that the alternative approach is better suited to cases where the subspace learning is restricted to the labelled samples, while in the presence of additional unlabelled data the proposed approach using graph-based subspace learning is more appropriate.
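For orientation, here is a minimal Laplacian-eigenmaps-style sketch of graph-based subspace discovery: build a nearest-neighbour graph over feature vectors, form its Graph Laplacian, and embed the data with the smallest non-trivial eigenvectors. A plain kNN graph with an assumed neighbour count stands in for the thesis's parameter-less graph construction.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(3)
x = rng.normal(size=(100, 20))          # stand-in low-level features

w = kneighbors_graph(x, n_neighbors=5, mode="connectivity")
w = 0.5 * (w + w.T)                     # symmetrise the kNN graph
w = w.toarray()
lap = np.diag(w.sum(axis=1)) - w        # unnormalised Graph Laplacian

_, vecs = np.linalg.eigh(lap)
embedding = vecs[:, 1:4]                # skip the constant eigenvector
print("embedded shape:", embedding.shape)
```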
279

A game-based approach towards human augmented image annotation

Seneviratne, Attgalage Lasantha Gunathilaka January 2011 (has links)
Image annotation is a difficult task to achieve in an automated way. In this thesis, a human-augmented approach to this problem is discussed and suitable strategies are derived to solve it. The proposed technique is inspired by human-based computation, in what is called “human-augmented” processing, to overcome the limitations of fully automated technology in closing the semantic gap. The approach aims to exploit what millions of individual gamers are keen to do, i.e. enjoy computer games, while annotating media. In this thesis, the image annotation problem is tackled by a game-based framework. This approach combines image processing and a game-theoretic model to gather media annotations. Although the proposed model behaves similarly to a single-player game, the underlying approach has been designed around a two-player model that exploits the current player's contributions together with those of previously recorded players to improve annotation accuracy. In addition, the proposed framework is designed to predict the player's intention through Markovian and Sequential Sampling inference, in order to detect cheating and improve annotation performance. Finally, the proposed techniques are comprehensively evaluated on three different image datasets, and selected representative results are reported.
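As an illustrative toy, the sketch below shows the flavour of the two-player validation idea: a live player's tag is accepted when it matches a recorded player's tag for the same image, and repeated disagreement raises a cheating-suspicion score. The tags, scoring rule and threshold are invented, and the Markovian and Sequential Sampling inference of the thesis is far richer than this.

```python
def score_round(live_tag, recorded_tag, suspicion):
    if live_tag == recorded_tag:
        return True, max(0, suspicion - 1)   # agreement: accept tag, relax
    return False, suspicion + 1              # disagreement: raise suspicion

suspicion = 0
rounds = [("cat", "cat"), ("dog", "cat"), ("car", "cat")]
for live, recorded in rounds:
    accepted, suspicion = score_round(live, recorded, suspicion)
    print(f"{live!r} accepted={accepted} suspicion={suspicion}")
print("flag player as cheating:", suspicion >= 2)
```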
280

Information-theoretic measures of predictability for music content analysis

Foster, Peter January 2014 (has links)
This thesis is concerned with determining similarity in musical audio, for the purpose of applications in music content analysis. With the aim of determining similarity, we consider the problem of representing temporal structure in music. To represent temporal structure, we propose to compute information-theoretic measures of predictability in sequences. We apply our measures to track-wise representations obtained from musical audio; thereafter we consider the obtained measures as predictors of musical similarity. We demonstrate that our approach benefits music content analysis tasks based on musical similarity. For the intermediate-specificity task of cover song identification, we compare contrasting discrete-valued and continuous-valued measures of pairwise predictability between sequences. In the discrete case, we devise a method for computing the normalised compression distance (NCD) which accounts for correlation between sequences. We observe that our measure improves average performance over NCD for sequential compression algorithms. In the continuous case, we propose to compute information-based measures as statistics of the prediction error between sequences. Evaluated using 300 Jazz standards and the Million Song Dataset, we observe that continuous-valued approaches outperform discrete-valued approaches. Further, we demonstrate that continuous-valued measures of predictability may be combined to improve performance with respect to baseline approaches. Using a filter-and-refine approach, we demonstrate state-of-the-art performance on the Million Song Dataset. For the low-specificity tasks of similarity rating prediction and song year prediction, we propose descriptors based on computing track-wise compression rates of quantised audio features, using multiple temporal resolutions and quantisation granularities. We evaluate our descriptors using a dataset of 15 500 track excerpts of Western popular music, for which we have 7 800 web-sourced pairwise similarity ratings. Combined with bag-of-features descriptors, we obtain performance gains of 31.1% and 10.9% for similarity rating prediction and song year prediction respectively. For both tasks, analysis of selected descriptors reveals that representing features at multiple time scales benefits prediction accuracy.
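For reference, here is a sketch of the standard normalised compression distance using zlib as the sequential compressor; the thesis's variant additionally corrects for correlation between the sequences, which is not reproduced here.

```python
import zlib

def csize(data: bytes) -> int:
    # compressed size as an approximation of Kolmogorov complexity
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

x1 = b"abcabcabcabc" * 20
x2 = b"abcabcabcabc" * 18 + b"xyz" * 8
print("NCD(similar):", round(ncd(x1, x2), 3))
print("NCD(dissimilar):", round(ncd(x1, bytes(range(256)) * 2), 3))
```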
