  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

25 years of network access technologies : from voice to internet : the changing face of telecommunications

Bremner, Duncan James January 2015 (has links)
This work contributes to knowledge in the field of semiconductor system architectures, circuit design and implementation, and communications protocols. It starts by describing the challenges of interfacing legacy analogue subscriber loops to an electronic circuit contained within the Central Office (telephone exchange) building. It then moves on to describe the globalisation of the telecom network, the demand for software-programmable devices to enable cost-effective system customisation, and the creation of circuit and system blocks to realise this. The work culminates in the application challenges of developing a wireless RF front end, including the antenna, for Ultra Wideband communications systems. This thesis illustrates how higher levels of integration over the period from 1981 to 2010 have influenced the realisation of complex system-level products, particularly analogue signal processing capabilities for communications applications. There have been many publications illustrating the impact of technology advancement from an economic or technological perspective. This thesis shows how technology advancement has affected the physical realisation of semiconductor products over the period, at the system, circuit and physical implementation levels.
152

Diffeomorphic image registration with applications to deformation modelling between multiple data sets

Papiez, Bartlomiej Wladyslaw January 2012 (has links)
In recent years, diffeomorphic image registration algorithms have been successfully introduced into the field of medical image analysis. At the same time, the practical usability of these techniques, which largely derives from their solid mathematical foundation, has been quantitatively explored only for limited applications such as longitudinal studies of treatment quality or disease progression. This thesis considers deformable image registration algorithms, seeking out those that maintain the medical correctness of the estimated dense deformation fields in terms of preserving the topology of the object and its neighbourhood, offer reasonable computational complexity to satisfy the time restrictions of the intended applications, and are able to cope with the low-quality data typically encountered in Adaptive Radiotherapy (ART). The research has led to the main emphasis being placed on diffeomorphic image registration, which achieves a one-to-one mapping between images. This involves the introduction of a log-domain parameterisation of the deformation field via its approximation by a stationary velocity field. A quantitative and qualitative examination of existing and newly proposed algorithms for pairwise deformable image registration, presented in this thesis, shows that the log-Euclidean parameterisation can be successfully utilised in biomedical applications. Although algorithms utilising the log-domain parameterisation have a theoretical justification for maintaining diffeomorphism, in general the deformation fields they produce have properties similar to those estimated by classical methods. With this in mind, the best compromise in terms of the quality of the deformation fields has been found in the consistent image registration framework.
The experimental results also suggest that image registration with symmetric warping of the input images outperforms the classical approaches, and can easily be introduced into most known algorithms. Furthermore, a log-domain implicit group-wise image registration is proposed. By linking sets of images from different subjects, the proposed approach establishes a common subject space and between-subject correspondences therein. Although correspondences between groups of images can be found by performing classic image registration, the need to select a reference image (not required in the proposed implementation) may lead to a biased mean image and a common subject space that does not adequately represent the general properties of the data sets. The diffeomorphic image registration approaches have also been utilised as the principal elements of two applications: estimating the movement of organs in the pelvic area, using a dense deformation field prediction system driven by partial information from a specific type of measurement parameterised via an implicit surface representation; and recognising facial expressions, where the stationary velocity fields are used as facial expression descriptors. Both applications have been extensively evaluated on real, representative data sets of three-dimensional volumes and two-dimensional images, and the results indicate the practical usability of the proposed techniques.
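The log-domain parameterisation mentioned above can be sketched in a few lines (a standard formulation; the notation here is illustrative rather than taken from the thesis). A deformation field is represented as the group exponential of a stationary velocity field $v$:

```latex
\varphi = \exp(v), \qquad \text{where } \varphi(x) = \phi(x,1) \text{ and }
\frac{\partial \phi(x,t)}{\partial t} = v\bigl(\phi(x,t)\bigr), \quad \phi(x,0) = x .
```

Because $\exp(v)$ is invertible with inverse $\exp(-v)$, deformations built this way are one-to-one by construction, which is the diffeomorphism guarantee the abstract refers to.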
153

User-controlled Identity Management Systems using mobile devices

Ferdous, Md. Sadek January 2015 (has links)
Thousands of websites offering a diverse array of online services have been a crucial factor in popularising the Internet around the world over the last 15 years. The current model of accessing the majority of those services requires users to register with a Service Provider - an administrative body that offers and provides online services. The registration procedure involves users supplying a number of pieces of data about themselves, which are then stored at the provider. This data forms a digital image of the user and is commonly known as the user's Identity at that provider. To access different online services, users register at different providers and ultimately end up with a number of scattered identities which become increasingly difficult to manage. This is one of the major problems of the current landscape of online services. What is worse, users have little control over the data stored at these providers and no knowledge of how their data is treated. The concept of Identity Management has been introduced to help users manage their identities in a user-friendly, secure and privacy-friendly way, and thus to tackle these problems. A number of Identity Management models and systems exist; unfortunately, none of them has played a pivotal role in tackling the problems effectively and comprehensively. Simultaneously, we have experienced another trend expanding at a remarkable rate: the adoption and usage of smart mobile devices. These mobile devices are growing not only in numbers but also in capability and capacity in terms of processing power and memory. Most are equipped with powerful hardware and highly dynamic mobile operating systems offering touch-sensitive, intuitive user interfaces. In many ways, these mobile devices have become an integral part of our day-to-day life and accompany us everywhere we go.
The capability, portability and ubiquitous presence of such mobile devices lead to the core objective of this research: an investigation of how such mobile devices can be used to overcome the limitations of current Identity Management Systems as well as to provide innovative online services. In short, this research investigates the need for a novel Identity Management System and the role the current generation of smart mobile devices can play in realising such a system. The research found that there exist different, inconsistent notions of many central topics in Identity Management, mostly defined in textual form. To tackle this problem, a comprehensive mathematical model of Identity and Identity Management has been developed. The model has been used to analyse several phenomena of Identity Management and to characterise different Identity Management models. Next, three popular Identity Management Systems have been compared using a taxonomy of requirements to identify the strengths and weaknesses of each system. One of the major findings is that the way different privacy requirements are satisfied in these systems is not standardised and depends on the specific implementation. Many systems do not satisfy many of those requirements at all, which can drastically affect the privacy of a user. To tackle the identified problems, the concept of a novel Identity Management System, called the User-controlled Identity Management System, has been proposed. This system offers better privacy and allows users to exert more control over their data from a central location, using a novel type of provider, called a Portable Personal Identity Provider, hosted inside the user's smart mobile device. It has been analysed how the proposed system can tackle the stated problems effectively and how it opens up new opportunities for online services.
In addition, it has been investigated how contextual information, such as location, can be utilised to provide online services using the proposed provider. One problem with existing Identity Management Systems is that providers cannot supply any contextual information, such as the location of a user. Hosting a provider in a mobile device allows it to access the device's sensors, retrieve contextual information from them and then provide that information. A framework has been proposed to harness this capability in order to offer innovative services. Another major issue with current Identity Management Systems is the lack of an effective mechanism for combining attributes from multiple providers. To overcome this problem, an architecture has been proposed, and it has been discussed how this architecture can be utilised to offer innovative services. Furthermore, it has been analysed how the privacy of a user can be improved using the proposed provider while accessing such services. Realising these proposals requires that several technical barriers be overcome. For each proposal, these barriers have been identified and addressed appropriately, along with a respective proof-of-concept prototype implementation. These prototypes have been utilised to illustrate the applicability of the proposals using different use cases. Furthermore, functional, security and privacy requirements suitable for each proposal have been formulated, and it has been analysed how the design choices and implementations satisfy these requirements. Finally, no discussion of Identity Management can be complete without analysing the underlying trust assumptions; therefore, different trust issues have been explored in greater detail throughout the thesis.
154

An empirical investigation into the effectiveness of a robot simulator as a tool to support the learning of introductory programming

Major, Louis January 2014 (has links)
Background: Robots have been used in the past as tools to aid the teaching of programming. There is limited evidence, however, about the effectiveness of simulated robots for this purpose. Aim: To investigate the effectiveness of a robot simulator as a tool to support the learning of introductory programming, by undertaking empirical research involving a range of participants. Method: After the completion of a Systematic Literature Review and exploratory research involving 33 participants, a multiple-case study was undertaken. A robot simulator was developed and subsequently used to run four 10-hour programming workshops. Participants included students aged 16 to 18 years old (n = 23) and trainee teachers (n = 23); three in-service teachers (n = 3) also took part. Effectiveness was determined by considering participants' opinions, attitudes and motivation when using the simulator, in addition to an analysis of the students' programming performance. Pre- and post-questionnaires, in- and post-workshop programming exercises, interviews and observations were used to collect data. Results: Participants enjoyed learning with the simulator and believed the approach to be valuable and engaging. Whilst several factors must be taken into consideration, the programming performance of the students indicates that the simulator aids learning, as most completed tasks to a satisfactory standard. The majority of trainee teachers, who had learned programming beforehand, believed that the simulator offered a more effective means of introducing the subject than their previous experience. In-service teachers were of the opinion that a simulator offers a valuable means of supporting the teaching of programming. Conclusion: The evidence suggests that a robot simulator can offer an effective means of introducing programming concepts to novices. Recommendations and suggestions for future research are presented based on the lessons learned. It is intended that these will help guide the development and use of robot simulators for teaching programming.
155

Secure*BPMN : a graphical extension for BPMN 2.0 based on a reference model of information assurance & security

Cherdantseva, Yulia January 2014 (has links)
The main contribution of this thesis is Secure*BPMN, a graphical security modelling extension for the de facto industry-standard business process modelling language BPMN 2.0.1. Secure*BPMN enables a cognitively effective representation of security concerns in business process models. It facilitates the engagement of experts with different backgrounds, including non-security and non-technical experts, in the discussion of security concerns and in security decision-making. The strength and novelty of Secure*BPMN lie in its comprehensive semantics, based on a Reference Model of Information Assurance & Security (RMIAS), and in its cognitively effective syntax. The RMIAS, which was developed in this project, is a synthesis of the existing knowledge of the Information Assurance & Security domain. The RMIAS helps to build an agreed-upon understanding of Information Assurance & Security, which experts with different backgrounds require before they can proceed to the discussion of security issues. The development process of the RMIAS, which was made explicit, and the multiphase evaluation carried out confirmed the completeness and accuracy of the RMIAS and its suitability as a foundation for the semantics of Secure*BPMN. The RMIAS, which has multiple implications for research, education and practice, is a secondary contribution of this thesis and a contribution to the Information Assurance & Security domain in its own right. The syntax of Secure*BPMN complies with the BPMN extensibility rules and with the scientific principles of cognitively effective notation design. Analytical and empirical evaluations corroborated the ontological completeness, cognitive effectiveness, ease of use and usefulness of Secure*BPMN. It was verified that Secure*BPMN has the potential to be adopted in practice.
156

Semantic attack on transaction data anonymised by set-based generalisation

Ong, Hoang January 2015 (has links)
Publishing data that contains information about individuals may lead to privacy breaches. However, data publishing is useful for supporting research and analysis. Privacy protection in data publishing is therefore important and has received much attention recently. To improve privacy protection, many researchers have investigated how secure published data is by designing de-anonymisation methods to attack anonymised data. Most de-anonymisation methods treat anonymised data in a syntactic manner: items in a dataset are considered to be contextless, or even meaningless, literals, and the semantics of these data items are not taken into account. In this thesis, we investigate how secure anonymised data is under attacks that use semantic information. More specifically, we propose a de-anonymisation method to attack transaction data anonymised by set-based generalisation. Set-based generalisation protects data by replacing an item with a set of items, so that the identity of an individual can be hidden. Our goal is to identify the items that were added to a transaction during generalisation. Our attacking method has two components: scoring and elimination. Scoring measures the semantic relationship between items in a transaction, and elimination removes items that are deemed not to be in the original transaction. Our experiments on both real and synthetic data show that set-based generalisation may not provide adequate protection for transaction data: about 70% of the items added to transactions during generalisation can be detected by our method with a precision greater than 85%.
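The two-stage attack described above (scoring, then elimination) can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual algorithm: the similarity function, the averaging scheme and the fixed `keep` parameter are all assumptions made for the example.

```python
def semantic_score(item, transaction, similarity):
    """Average semantic similarity of `item` to the other items in the
    generalised transaction (higher = fits the context better)."""
    others = [j for j in transaction if j != item]
    if not others:
        return 0.0
    return sum(similarity(item, j) for j in others) / len(others)

def eliminate(transaction, similarity, keep):
    """Keep the `keep` items that best fit the transaction's semantic
    context; the rest are guessed to be fake items added by set-based
    generalisation."""
    ranked = sorted(transaction,
                    key=lambda i: semantic_score(i, transaction, similarity),
                    reverse=True)
    return set(ranked[:keep])
```

With a toy similarity function that scores 1.0 for items in the same category, an added out-of-context item is the first to be eliminated.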
157

The automatic implementation of a dynamic load balancing strategy within structured mesh codes generated using a parallelisation tool

Rodrigues, Jacqueline Nadine January 2003 (has links)
This research demonstrates that the automatic implementation of a dynamic load balancing (DLB) strategy within a parallel SPMD (single program multiple data) structured mesh application code is possible. It details how DLB can be effectively employed to reduce the level of load imbalance in a parallel system without expert knowledge of the application. By furnishing CAPTools (the Computer Aided Parallelisation Tools) with the additional functionality of DLB, a DLB parallel version of a serial Fortran 77 application code can be generated quickly and easily with the press of a few buttons, allowing the user to obtain results on various platforms rather than concentrate on implementing a DLB strategy within their code. Results show that the devised DLB strategy successfully decreases idle time by locally increasing or decreasing processor workloads as and when required to suit the parallel application, utilising the available resources efficiently. Several possible DLB strategies are examined, with the understanding that the strategy needs to be generic if it is to be automatically implemented within CAPTools and applied to a wide range of application codes. This research investigates the issues surrounding load imbalance, distinguishing between processor and physical imbalance in terms of the load redistribution of a parallel application executed on a homogeneous or heterogeneous system. Issues such as where to redistribute the workload, how often to redistribute, and how to calculate and implement the new distribution (in the latter case, deciding which data arrays to redistribute) are all covered in detail, and many of these issues are common to the automatic implementation of DLB for unstructured mesh application codes. The devised DLB Staggered Limit Strategy discussed in this thesis offers flexibility as well as ease of implementation whilst minimising changes to the user's code.
The generic utilities developed for this research are discussed along with the manual implementation upon which the automation algorithms are based; these utilities are interchangeable with alternative methods if desired. This thesis aims to encourage the use of the DLB Staggered Limit Strategy, since its benefits are significant and are now easily achievable through its automatic implementation using CAPTools.
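As a rough illustration of the kind of redistribution decision such a strategy must make, the sketch below proposes new workload shares in proportion to each processor's measured speed over the previous step. It is a hypothetical simplification for a heterogeneous system, not the Staggered Limit Strategy itself.

```python
def redistribute(row_counts, step_times):
    """Given each processor's current share of mesh rows and the wall-clock
    time it took for the last step, propose new shares proportional to
    measured speed (rows processed per second), keeping the total constant."""
    total = sum(row_counts)
    speeds = [r / t for r, t in zip(row_counts, step_times)]
    speed_sum = sum(speeds)
    shares = [int(total * s / speed_sum) for s in speeds]
    shares[-1] += total - sum(shares)   # absorb any rounding remainder
    return shares
```

For example, if a processor took twice as long as its peer to process a comparable share, it receives a correspondingly smaller share next time.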
158

Prognostics and health management of light emitting diodes

Sutharssan, Thamotharampillai January 2012 (has links)
Prognostics is an engineering process of diagnosing faults, predicting the remaining useful life and estimating the reliability of systems and products. Prognostics and Health Management (PHM) has emerged in the last decade as one of the most efficient approaches to failure prevention, reliability estimation and remaining-useful-life prediction for various engineering systems and products. Light Emitting Diodes (LEDs) are optoelectronic micro-devices that are now replacing traditional incandescent and fluorescent lighting, as they have many advantages including higher reliability, greater energy efficiency, long lifetime and faster switching speed. Even though LEDs have high reliability and a long lifetime, manufacturers and lighting system designers still need to assess the reliability of LED lighting systems and the failures of LEDs. This research provides both experimental and theoretical results that demonstrate the use of prognostics and health monitoring techniques for high-power LEDs subjected to harsh operating conditions. Data-driven, model-driven and fusion prognostics approaches are developed to monitor and identify LED failures, based on the required light output power. The approaches adopted in this work are validated and can be used to assess the life of an LED lighting system after deployment, based on the power of the emitted light output. The data-driven techniques are based solely on monitoring selected operational and performance indicators using sensors, whereas the model-driven technique is based on sensor data as well as a developed empirical model. A fusion approach, combining the data-driven and model-driven approaches, is also developed for the LED. Real-time implementation of the developed approaches is also investigated and discussed.
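A minimal data-driven sketch of the idea, under the common assumption of exponential lumen depreciation and the widely used L70 end-of-life criterion (light output fallen to 70% of its initial value); the decay model and the fitting choice are illustrative, not taken from the thesis:

```python
import math

def remaining_useful_life(hours, lumen_ratio, threshold=0.7):
    """Fit an exponential decay L(t) = exp(-a*t) to normalised light-output
    measurements (least squares on the log-transformed data, intercept fixed
    at 0) and extrapolate the time at which output falls to `threshold` of
    its initial value. Returns hours of life left after the last measurement."""
    num = sum(t * (-math.log(r)) for t, r in zip(hours, lumen_ratio))
    den = sum(t * t for t in hours)
    a = num / den                         # fitted decay rate
    t_fail = -math.log(threshold) / a     # time at which L(t) = threshold
    return t_fail - hours[-1]
```

Fed with synthetic measurements generated from a known decay rate, the fit recovers that rate and the projected failure time exactly.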
159

The usability of knowledge based authentication methods on mobile devices

Rooney, James January 2013 (has links)
Mobile devices are providing ever increasing functionality to users, and the risks associated with applications storing personal details are high. Graphical authentication methods have been shown to provide better security in terms of password space than traditional approaches, as well as being more memorable. The usability of any system is important since an unusable system will often be avoided. This thesis aims to investigate graphical authentication methods based on recall, cued recall and recognition memory in terms of their usability and security.
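The claim about password space can be made concrete with a back-of-the-envelope comparison. The scheme parameters below (grid size, number of clicks, repeats allowed) are hypothetical, chosen only to illustrate how a graphical scheme's theoretical password space can exceed that of a numeric PIN:

```python
def pin_space(digits):
    """Number of possible numeric PINs of a given length."""
    return 10 ** digits

def click_point_space(grid_cells, clicks):
    """Theoretical password space of a hypothetical cued-recall scheme in
    which the user clicks an ordered sequence of cells on a grid, with
    repeated cells allowed."""
    return grid_cells ** clicks
```

For instance, an ordered sequence of 5 clicks on a 5x5 grid already yields a larger space than a 6-digit PIN.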
160

Using local and global knowledge in wireless sensor networks

Gwilliams, Christopher January 2015 (has links)
Wireless sensor networks (WSNs) have advanced rapidly in recent years, and the volume of raw data received at an endpoint can be huge. We believe that the use of local knowledge, acquired from sources such as the surrounding environment, users and previously sensed data, can improve the efficiency of a WSN and automate the classification of sensed data. We define local knowledge as knowledge about an area that has been gained through experience or experimentation. With this in mind, we have developed a three-tiered architecture for WSNs that uses differing knowledge-processing capabilities at each tier, called the Knowledge-based Hierarchical Architecture for Sensing (K-HAS). A novel aligning ontology has been created to support K-HAS, joining widely used, domain-specific ontologies from the sensing and observation domains. We have shown that, as knowledge-processing capabilities are pushed further out into the network, the profit, defined as the value of the sensed data received by the end user, increases. Collaborating with Cardiff University School of Biosciences, we have deployed a variation of K-HAS in the Malaysian rainforest to capture images of endangered wildlife, and to automate the collection and classification of these images. Technological limitations prevented a complete implementation of K-HAS, so an amalgamation of tiers was made to create the Local knowledge Ontology-based Remote-sensing Informatics System (LORIS). A two-week deployment in Malaysia suggested that the architecture was viable and that using local knowledge, even only at the endpoint of a WSN, improved the efficiency of the network. A simulation was implemented to model K-HAS, and it indicated that the network became more efficient as knowledge was pushed further towards the edge, by allowing nodes to prioritise sensed data based on inferences about its content.
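Edge-side prioritisation of sensed data, as in the simulation described above, can be sketched as follows; the scoring classifier and the transmission budget are hypothetical stand-ins for the knowledge-based inference the thesis describes:

```python
import heapq

def prioritised_send(readings, classify, budget):
    """Sketch of edge-side prioritisation: each reading is scored by a local
    knowledge classifier and only the `budget` most valuable readings are
    transmitted upstream, highest value first. The index in each heap tuple
    breaks ties so readings themselves are never compared."""
    heap = [(-classify(r), i, r) for i, r in enumerate(readings)]
    heapq.heapify(heap)
    out = []
    for _ in range(min(budget, len(heap))):
        _, _, r = heapq.heappop(heap)
        out.append(r)
    return out
```

A node with a tight uplink budget thus forwards its most informative readings first and drops or defers the rest.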
