  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Privacy and practicality of identity management systems

Alrodhan, Waleed A. January 2010 (has links)
No description available.
212

Analyzing and developing role-based access control models

Chen, Liang January 2011 (has links)
Role-based access control (RBAC) has become today's dominant access control model, and many of its theoretical and practical aspects are well understood. However, certain aspects of more advanced RBAC models, such as the relationship between permission usage and role activation and the interaction between inheritance and constraints, remain poorly understood. Moreover, the computational complexity of some important problems in RBAC remains unknown. In this thesis we consider these issues, develop new RBAC models and answer a number of these questions. We develop an extended RBAC model that proposes an alternative way to distinguish between activation and usage hierarchies. Our extended RBAC model has well-defined semantics, derived from a graph-based interpretation of RBAC state. Pervasive computing environments have created a requirement for access control systems in which authorization is dependent on spatio-temporal constraints. We develop a family of simple, expressive and flexible spatio-temporal RBAC models, and extend these models to include activation and usage hierarchies. Unlike existing work, our models address the interaction between spatio-temporal constraints and inheritance in RBAC, and are consistent and compatible with the ANSI RBAC standard. A number of interesting problems have been defined and studied in the context of RBAC recently. We explore some variations on the set cover problem and use these variations to establish the computational complexity of these problems. Most importantly, we prove that the minimal cover problem -- a generalization of the set cover problem -- is NP-hard. The minimal cover problem is then used to determine the complexity of the inter-domain role mapping problem and the user authorization query problem in RBAC. We also design a number of efficient heuristic algorithms to solve the minimal cover problem, and conduct experiments to evaluate the quality of these algorithms.
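
As background for the set-cover connection mentioned in this abstract, the following is a minimal greedy set cover sketch in Python; the function name and the roles/permissions example are hypothetical illustrations, not the specific heuristic algorithms developed in the thesis.

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for set cover: repeatedly pick the subset
    covering the most still-uncovered elements. Returns indices of the
    chosen subsets, or None if the universe cannot be covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # index of the subset covering the largest number of uncovered elements
        best = max(range(len(subsets)), key=lambda i: len(uncovered & subsets[i]))
        gain = uncovered & subsets[best]
        if not gain:            # no subset covers anything new: infeasible
            return None
        chosen.append(best)
        uncovered -= gain
    return chosen

# Hypothetical example: roles (sets of permissions) covering a requested permission set
permissions = {"p1", "p2", "p3", "p4", "p5"}
roles = [{"p1", "p2"}, {"p2", "p3", "p4"}, {"p4", "p5"}, {"p1", "p5"}]
print(greedy_set_cover(permissions, roles))   # [1, 3]: roles 1 and 3 cover all permissions
```
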
213

On plaintext-aware public-key encryption schemes

Birkett, James January 2010 (has links)
Plaintext awareness is a property of a public-key encryption scheme intended to capture the idea that the only way to produce a valid ciphertext is to take a message and encrypt it. The idea is compelling, but the devil, as always, is in the details. The established definition of plaintext awareness in the standard model is known as PA2 plaintext awareness and was introduced by Bellare and Palacio. We propose a modified definition of plaintext awareness, which we call 2PA2, in which the arbitrary stateful plaintext creators of the PA2 definition are replaced with a choice of two fixed stateless plaintext creators. We show that under reasonable conditions our new definition is equivalent to the standard one. We also adapt techniques used by Teranishi and Ogata to show that no encryption scheme which allows arbitrarily long messages can be PA2 plaintext aware, a disadvantage which our new definition does not appear to share. Dent has shown that a variant of the Cramer-Shoup encryption scheme based on the Diffie-Hellman problem is PA2 plaintext aware under the Diffie-Hellman Knowledge (DHK) assumption. We present a generalisation of this assumption to arbitrary subset membership problems, which we call the Subset Witness Knowledge (SWK) assumption, and use it to show that the generic Cramer-Shoup and Kurosawa-Desmedt encryption schemes based on hash proof systems are plaintext aware. In the case of the Diffie-Hellman problem, the SWK assumption is exactly the Diffie-Hellman Knowledge assumption, but we also discuss several other possible instantiations of this assumption.
214

How to meet the evolving situational awareness needs from airborne platforms

Tarter, Alex January 2010 (has links)
In order to operate safely, civil aviation is increasingly reliant on the collection and provision of situational awareness. This situational awareness is fed to the pilot, who uses it to know what is going on around them and minimise the risk of a dangerous situation occurring. Since their inception, unmanned aerial systems have been used by military commanders to provide situational awareness (namely imagery) of remote areas. This situational awareness information is transmitted back to them over military data-links so they can use it to make decisions, coordinate forces and plan strategies. However, times are changing, or to be more specific, the number and variety of decision-makers on the ground who require situational awareness information generated on airborne platforms are increasing. The September 11th 2001 attacks using hijacked aircraft have meant that security now plays a greater role in aviation alongside safety. Multiple decision-makers on the ground, from political heads to air defence commanders, now also want access to situational awareness information on the aircraft. This means that in addition to the flow of safety-related situational awareness information to the pilot, there will need to be a whole new flow of security information from the aircraft to decision-makers on the ground. The same style of shift is occurring in the military UAS community as a result of implementing the twin doctrines of network centric operations and power to the edge. This means providing a greater amount of situational awareness to lower-level decision-makers (soldiers in the field), so instead of providing just one feed to a commander, the unmanned aerial system now has to supply imagery to multiple receivers, all of whom could have different situational awareness needs. This thesis addresses those points and proposes using on-board processing systems for both platform types to create situational awareness information streams capable of simultaneously meeting the requirements of multiple decision-makers. This is accomplished with the use of fuzzy inference systems to turn raw sensor information into pieces of situational awareness that can be acted upon by decision-makers. These systems look for anomalous activity in passenger behaviour, which could indicate a security situation is occurring. The thesis also proposes a method that allows decision-makers to tailor an imagery system to their needs rather than forcing decision-makers to use a one-size-fits-all type of situational awareness provision system. The results of this thesis show that, using historical patterns of behaviour and scenario generation, airborne systems can be built to meet the new needs of multiple decision-makers on the ground. Techniques such as fuzzy inference systems can be tailored to perform the collection and processing of data into situational awareness information, allowing it to be communicated over existing bandwidth-limited connections. Therefore the overall hypothesis of this thesis is that there are evolving situational awareness needs that existing systems cannot meet, and that through onboard situational awareness collection and processing systems, ground-based decision-makers can obtain the situational awareness information they need even over the existing bandwidth-limited communications channels.
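
To illustrate the kind of fuzzy inference the abstract refers to, below is a minimal sketch in Python of a toy rule base that maps a behavioural-deviation score to an alert level; the membership functions, rule levels and the weighted-average (zero-order Sugeno style) defuzzification are illustrative assumptions, not the thesis's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def anomaly_alert(deviation):
    """Toy fuzzy inference: map a 0-10 'deviation from the historical
    behaviour pattern' score to a 0-1 alert level by weighting each
    rule's representative output level by its firing strength."""
    # Rule firing strengths (antecedent memberships)
    low    = tri(deviation, -1, 0, 5)    # deviation is low
    medium = tri(deviation,  2, 5, 8)    # deviation is medium
    high   = tri(deviation,  5, 10, 11)  # deviation is high
    # Representative alert levels for each rule, combined by weighted average
    weights = [low, medium, high]
    levels  = [0.1, 0.5, 0.9]            # low / medium / high alert
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, levels)) / total if total else 0.0

print(anomaly_alert(3.0))   # moderate deviation -> alert between low and medium
print(anomaly_alert(9.0))   # strong deviation   -> high alert (0.9)
```
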
215

The impact of SSC on high-latitude HF communications

Ritchie, Samuel Esteban January 2009 (has links)
No description available.
216

A novel framework for high-quality voice source analysis and synthesis

Turajlic, Emir January 2006 (has links)
The analysis, parameterization and modeling of voice source estimates obtained via inverse filtering of recorded speech are some of the most challenging areas of speech processing, owing to the fact that humans produce a wide range of voice source realizations and that the voice source estimates commonly contain artifacts due to the non-linear time-varying source-filter coupling. Currently, the most widely adopted representation of the voice source signal is the Liljencrants-Fant (LF) model, which was developed in 1985. Due to its overly simplistic interpretation of voice source dynamics, the LF model can neither represent the fine temporal structure of glottal flow derivative realizations nor carry sufficient spectral richness to facilitate a truly natural-sounding speech synthesis. In this thesis we have introduced Characteristic Glottal Pulse Waveform Parameterization and Modeling (CGPWPM), which constitutes an entirely novel framework for voice source analysis, parameterization and reconstruction. In a comparative evaluation of CGPWPM and the LF model we have demonstrated that the proposed method is able to preserve higher levels of speaker-dependent information from the voice source estimates and realize a more natural-sounding speech synthesis. In general, we have shown that CGPWPM-based speech synthesis rates highly on the scale of absolute perceptual acceptability and that speech signals are faithfully reconstructed on a consistent basis, across speakers and genders. We have applied CGPWPM to voice quality profiling and to a text-independent voice quality conversion method. The proposed voice conversion method is able to achieve the desired perceptual effects, and the modified speech remains as natural sounding and intelligible as natural speech. In this thesis, we have also developed an optimal wavelet thresholding strategy for voice source signals which is able to suppress aspiration noise and still retain both the slow and the rapid variations in the voice source estimate.
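
As a point of reference for the wavelet thresholding mentioned above, the following is a minimal soft-thresholding (VisuShrink-style) sketch using PyWavelets; it is a generic baseline rather than the optimal thresholding strategy developed in the thesis, and the function and signal names are hypothetical.

```python
import numpy as np
import pywt

def denoise_voice_source(signal, wavelet="db4", level=4):
    """Generic wavelet soft-thresholding sketch: estimate the noise level
    from the finest detail coefficients (robust MAD estimate) and apply
    the universal threshold to all detail bands before reconstruction."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))       # universal threshold
    # Keep the approximation band, soft-threshold every detail band
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Hypothetical usage on an inverse-filtered voice source estimate `gfd`
# gfd_clean = denoise_voice_source(gfd)
```
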
217

Efficient FPGA implementation and power modelling of image and signal processing IP cores

Chandrasekaran, Shrutisagar January 2007 (has links)
Field Programmable Gate Arrays (FPGAs) are the technology of choice in a number of image and signal processing application areas such as consumer electronics, instrumentation, medical data processing and avionics, due to their reasonable energy consumption, high performance, security, low design-turnaround time and reconfigurability. Low power FPGA devices are also emerging as competitive solutions for mobile and thermally constrained platforms. Most computationally intensive image and signal processing algorithms also consume a lot of power, leading to a number of issues including reduced mobility, reliability concerns and increased design cost, among others. Power dissipation has become one of the most important challenges, particularly for FPGAs. Addressing this problem requires optimisation and awareness at all levels in the design flow. The key achievements of the work presented in this thesis are summarised here. Behavioural level optimisation strategies have been used for implementing matrix product and inner product through the use of mathematical techniques such as Distributed Arithmetic (DA) and its variations, including offset binary coding, sparse factorisation and novel vector level transformations. Applications to test the impact of these algorithmic and arithmetic transformations include the fast Hadamard/Walsh transforms and Gaussian mixture models. Complete design space exploration has been performed on these cores, and where appropriate, they have been shown to clearly outperform comparable existing implementations. At the architectural level, strategies such as parallelism, pipelining and systolisation have been successfully applied for the design and optimisation of a number of cores including colour space conversion, finite Radon transform, finite ridgelet transform and circular convolution. A pioneering study into the influence of supply voltage scaling for FPGA based designs, used in conjunction with performance enhancing strategies such as parallelism and pipelining, has been performed. Initial results are very promising and indicate significant potential for future research in this area. A key contribution of this work is the development of a novel high level power macromodelling technique for design space exploration and characterisation of custom IP cores for FPGAs, called Functional Level Power Analysis and Modelling (FLPAM). FLPAM is scalable, platform independent and compares favourably with existing approaches. A hybrid, top-down design flow paradigm integrating FLPAM with commercially available design tools for systematic optimisation of IP cores has also been developed.
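
To make the Distributed Arithmetic idea concrete, below is a minimal bit-serial DA inner-product sketch in Python for unsigned inputs; it mimics the LUT plus shift-accumulate structure of an FPGA DA core but is only an illustrative software model, not one of the optimised cores developed in the thesis.

```python
def da_inner_product(coeffs, xs, nbits=8):
    """Bit-serial Distributed Arithmetic (DA) inner product for unsigned
    inputs: precompute a LUT of all partial sums of the fixed coefficients,
    then accumulate one input bit-plane per step, as an FPGA DA core does
    with a small ROM and a shift-accumulator."""
    k = len(coeffs)
    # LUT indexed by a k-bit pattern: sum of coefficients whose bit is set
    lut = [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
           for addr in range(1 << k)]
    acc = 0
    for b in range(nbits):                      # one pass per input bit plane
        addr = sum(((x >> b) & 1) << i for i, x in enumerate(xs))
        acc += lut[addr] << b                   # shift-accumulate
    return acc

coeffs = [3, -1, 4, 2]           # fixed filter coefficients (A_k)
xs     = [10, 7, 200, 33]        # unsigned 8-bit input samples (x_k)
assert da_inner_product(coeffs, xs) == sum(c * x for c, x in zip(coeffs, xs))
```
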
218

Performance enhancements for single hop and multi-hop meshed high data rate wireless personal area networks

Mahmud, Sahibzada Ali January 2010 (has links)
High Data Rate (HDR) Wireless Personal Area Networks (WPANs) typically have a limited operating range and are intended to support demanding multi-media applications at high data rates. In order to extend the communication range, HDR WPANs can operate in a wireless mesh configuration (i.e. enable multiple WPAN clusters) to communicate in a multi-hop fashion. HDR WPANs face several research challenges, and some of the key open issues are limited capacity, optimum resource allocation to requesting devices and maintaining Quality of Service (QoS) for real-time multimedia flows. Although some scheduling algorithms have been proposed for HDR WPANs, their main objective in most cases is to maintain QoS, whereas efficient and fair utilization of network capacity is still largely open for research. This thesis mainly intends to resolve the issues related to the capacity of HDR WPANs, such as admission control, fair allocation of Channel Time Allocations (CTAs), improvement in capacity through transmission power control, and efficient utilization of time by each flow. A technique which re-orders the time slots to reduce queuing delay for meshed WPANs is also proposed and evaluated. The first contribution aims to improve peer-to-peer connectivity in the case of two or more independent piconet devices by proposing an inter-PAN communication framework that is augmented by an admission control strategy to handle the cases when the superframe capacity is congested. The queued devices are prioritized by a proposed parameter called the Rejection Ratio. The second contribution consists of a resource allocation framework for meshed WPANs. The main objectives are to reduce the control traffic due to the high volume of channel time reservation requests and to introduce an element of fairness in the channel time allocated to requesting devices. These objectives are achieved by using traffic prediction techniques and an estimated backoff procedure to reduce control traffic, and by defining different policies based on offered traffic for fair allocation of channel time. The centralized scheme uses traffic prediction techniques to realize the proposed concept of bulk reservations. Based on the bulk reservations and resource allocation policies, the overall overhead is reduced while an element of fairness is shown to be maintained for certain scenarios. In the third contribution, the concepts of Time Efficiency and CTA switching are introduced to improve communication efficiency and utilization of superframe capacity in meshed WPANs. Two metrics, known as Switched Time Slot (STS) and Switched Time Slot with Re-ordering (STS-R), are proposed to achieve this. The final contribution proposes and evaluates a technique called CTA overlapping to improve capacity in single hop and meshed WPANs using transmission power control. Extensive simulation studies are performed to analyze and evaluate the proposed techniques. Simulation results demonstrate significant improvements in meshed WPAN performance in terms of capacity utilization, improvement in the fairness index for CTA allocation by up to 62% in some cases, reduction in control traffic overhead by up to 70% and reduction in delay for real-time flows by more than 10% in some cases.
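
The fairness improvements quoted above are typically measured with a fairness index; as an illustration, the sketch below computes Jain's fairness index over a set of CTA allocations. The example values are hypothetical, and it is an assumption (not confirmed by the abstract) that the thesis uses this particular index.

```python
def jain_fairness(allocations):
    """Jain's fairness index for a set of channel time allocations (CTAs):
    1.0 means perfectly equal shares, 1/n means one device gets everything."""
    n = len(allocations)
    total = sum(allocations)
    sum_sq = sum(a * a for a in allocations)
    return (total * total) / (n * sum_sq) if sum_sq else 1.0

# Hypothetical CTA allocations (in superframe time slots) for four devices
print(jain_fairness([10, 10, 10, 10]))   # 1.0   -> perfectly fair
print(jain_fairness([25, 5, 5, 5]))      # ~0.57 -> one device dominates
```
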
219

Investigation of high bandwidth biodevices for transcutaneous wireless telemetry

Elamare, Gehad January 2010 (has links)
Biodevice implants for telemetry are increasingly applied today in various application areas. There are many examples, such as telemedicine, biotelemetry, health care, and treatments for chronic diseases, epilepsy and blindness, all of which use a wireless infrastructure environment. They use microelectronics technology for diagnostics or for monitoring signals such as electroencephalography or electromyography. Conceptually, the biodevices are defined as one of these technologies combined with transcutaneous wireless implant telemetry (TWIT). A wireless inductive coupling link is a common way of transferring the RF power and data to communicate between a reader and a battery-less implant. Demand for higher data rates for the acquisition data returned from the body is increasing, and requires an efficient modulator to achieve a high transfer rate and low power consumption. In such applications, Quadrature Phase Shift Keying (QPSK) modulation has advantages over other schemes, doubling the data rate relative to Binary Phase Shift Keying (BPSK) over the same spectrum band. In contrast to analogue modulators for generating QPSK signals, where the circuit complexity and power dissipation are unsuitable for medical purposes, a digital approach has advantages: a simple design can be achieved by mixing hardware and software to minimize size and power consumption for implantable telemetry applications. This work proposes a new approach to digital modulator techniques, applied to transcutaneous implantable telemetry applications, inherently increasing the data rate and simplifying the hardware design. A novel design for a QPSK VHDL modulator to convey a high data rate is demonstrated. Essentially, CPLD/FPGA technology is used to generate hardware from VHDL code and implement the device which performs the modulation. This improves the data transmission rate between the reader and the biodevice. This type of modulator provides digital synthesis and the flexibility to reconfigure and upgrade, with VHDL and Verilog (both IEEE standards) being the two most commonly used hardware description languages. The second objective of this thesis is to improve the wireless coupling power (WCP). An efficient power amplifier was developed, along with a new algorithm for auto-power control at the reader unit, which monitors the implant device and keeps it working within the safety regulation power limits (SAR). The proposed system design has also been modeled and simulated with MATLAB/Simulink to validate the modulator and examine its performance in relation to its specifications.
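
For context on the QPSK mapping itself, below is a minimal Gray-coded QPSK baseband mapping sketch in Python; the thesis's modulator is implemented in VHDL on CPLD/FPGA hardware, so this is only an illustrative software model, not the proposed design.

```python
import numpy as np

def qpsk_map(bits):
    """Gray-coded QPSK mapping: each pair of bits selects one of four
    unit-energy constellation points, so two bits are sent per symbol
    (the property that doubles the data rate relative to BPSK)."""
    assert len(bits) % 2 == 0
    # Gray mapping: 00 -> 45 deg, 01 -> 135 deg, 11 -> 225 deg, 10 -> 315 deg
    gray = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
    pairs = zip(bits[0::2], bits[1::2])
    phases = np.array([np.pi / 4 + gray[p] * np.pi / 2 for p in pairs])
    return np.exp(1j * phases)          # complex baseband symbols

print(qpsk_map([0, 0, 1, 1, 0, 1]))    # three symbols for six input bits
```
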
220

Quality of service for VoIP in wireless communications

Lopetegui Cincunegui, Iban January 2011 (has links)
Ever since telephone services became available to the public, technologies have evolved towards more efficient methods of handling phone calls. Originally circuit-switched networks were a breakthrough for voice services, but today most technologies have adopted packet-switched networks, improving efficiency at the cost of Quality of Service (QoS). A good example of a packet-switched network is the Internet, a resource created to handle data over the Internet Protocol (IP) that can also carry voice services, known as Voice over Internet Protocol (VoIP). The combination of wireless networks and free VoIP services is very popular; however, its limitations in security and network overload are still a handicap for most practical applications. This thesis investigates network performance under VoIP sessions. The aim is to compare the performance of a variety of audio codecs that diminish the impact of VoIP on the network. The contribution of this research is therefore twofold: to study and analyse the extension of speech quality predictors by a new speech quality model that accurately estimates whether the network can handle a VoIP session or not, and to implement a new application of network coding for VoIP to increase throughput. The analysis and study of speech quality predictors is based on the mathematical framework of the E-model. A case study of an embedded Session Initiation Protocol (SIP) proxy, merged with a Media Gateway that bridges mobile networks to wired networks, has been developed to understand its effects on QoS. Experimental speech quality measurements under wired and wireless scenarios were compared with the mathematical speech quality predictor, resulting in an extended mathematical solution of the E-model. A new speech quality model for cascaded networks was designed and implemented out of this research. Provided that each channel is modelled by a Markov chain packet loss model, the methodology can predict the expected speech quality and inform the QoS manager to take action. From a data rate perspective a VoIP session has a very specific characteristic: the data exchanged between two end nodes is often symmetrical. This opens up a new opportunity for centralised VoIP sessions where network coding techniques can be applied to increase throughput performance on the channel. An application-layer scheme based on network coding has been implemented, which is fully compatible with existing protocols and successfully achieves the network capacity.
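
As background for the E-model mentioned above, the sketch below implements the standard ITU-T G.107 mapping from the rating factor R to an estimated MOS, together with a commonly used simplified R computation; it is not the extended cascaded-network model developed in the thesis, and the example impairment values are hypothetical.

```python
def r_to_mos(r):
    """ITU-T G.107 E-model mapping from rating factor R to estimated MOS."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

def e_model_r(ie_eff=0.0, idd=0.0):
    """Common simplification of the E-model rating: start from the default
    impairment-free rating R = 93.2 and subtract the delay impairment Id
    and the effective equipment (codec plus packet-loss) impairment Ie,eff."""
    return 93.2 - idd - ie_eff

# Hypothetical example: codec/loss impairment of 25 and delay impairment of 10
r = e_model_r(ie_eff=25.0, idd=10.0)
print(r, r_to_mos(r))        # R = 58.2 -> MOS about 3.0
```
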
