31

Modelling and reasoning about trust relationships in the development of trustworthy information systems

Pavlidis, Michail January 2014 (has links)
Trustworthy information systems are information systems that fulfil all of their functional and non-functional requirements. To this end, all the components of an information system, whether human or technical, need to collaborate in order to meet its requirements and achieve its goals. This entails that system components will show the desired or expected behaviour once the system is put into operation. However, modern information systems include a great number of components that can behave in very unpredictable ways. This unpredictability of the behaviour of system components is a major challenge to the development of trustworthy information systems, particularly during the modelling stage. When a system component is modelled as part of a requirements engineering model, it introduces uncertainty about its future behaviour, undermining the accuracy of the system model and eventually the system's trustworthiness. The addition of system components is therefore inevitably based on assumptions about their future behaviour. Such assumptions underlie the development of a system and are usually assumptions of trust made by the system developer about her trust relationships with the system components, relationships that are formed as soon as a component is inserted into a requirements engineering model of the system. However, despite the importance of such issues, a requirements engineering methodology that explicitly captures these trust relationships, along with the entailed trust assumptions and trustworthiness requirements, is still missing. To tackle these problems, the thesis proposes a requirements engineering methodology, namely JTrust (Justifying Trust), for developing trustworthy information systems. The methodology is founded upon the notions of trust and control as the means of achieving confidence. In order to develop an information system, the developer needs to consider her trust relationships with the system components that are formed by their addition to a system model, reason about them, and proceed to a justified decision about the design of the system. If a system component cannot be trusted to behave in the desired or expected way, the question arises of what the alternatives are for building confidence in its future behaviour. To answer this question we define a new class of requirements, namely trustworthiness requirements. Trustworthiness requirements prescribe the functionality of the software included in the information system that compels the rest of the information system components to behave in a desired or expected way. The proposed methodology consists of: (i) a modelling language which contains trust and control abstractions; and (ii) a methodological process for capturing and reasoning about trust relationships, modelling and analysing trustworthiness requirements, and assessing the system trustworthiness at the requirements stage. The methodology is accompanied by a supporting CASE tool. To evaluate our proposal, we applied the methodology to a case study and carried out a survey to obtain feedback from experts. The case study concerned the e-health care system of the National Health Service in England, which was used to reason about trust relationships with system components and identify trustworthiness requirements.
Researchers from three academic institutions across Europe and from one industrial company, British Telecom, participated in the survey and provided valuable feedback about the effectiveness and efficiency of the methodology. The results indicate that JTrust is useful and easy to use for modelling and reasoning about trust relationships, modelling and analysing trustworthiness requirements, and assessing system trustworthiness at the requirements level.
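A minimal sketch of the kind of trust relationship and derived trustworthiness requirement the methodology reasons about, assuming hypothetical class and field names; this is an illustration only, not the actual JTrust metamodel or tool.

```python
# Illustrative sketch only: hypothetical representation of a trust relationship
# and the trustworthiness requirement derived when trust cannot be justified.
from dataclasses import dataclass
from typing import List

@dataclass
class TrustRelationship:
    trustor: str              # e.g. the system developer
    trustee: str              # a human or technical system component
    expected_behaviour: str   # what the trustee is assumed to do
    justified: bool = False   # has the trust assumption been justified?

@dataclass
class TrustworthinessRequirement:
    """Software functionality that compels a component to behave as expected
    when the trust assumption alone cannot be justified."""
    description: str
    compels: str

def analyse(relationships: List[TrustRelationship]) -> List[TrustworthinessRequirement]:
    """For every unjustified trust relationship, derive a control-based
    trustworthiness requirement (placeholder derivation)."""
    return [
        TrustworthinessRequirement(
            description=f"Software control ensuring '{r.expected_behaviour}'",
            compels=r.trustee,
        )
        for r in relationships if not r.justified
    ]

if __name__ == "__main__":
    rels = [TrustRelationship("developer", "care record service",
                              "discloses records only to authorised clinicians")]
    for req in analyse(rels):
        print(req)
```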
32

Real time and performance management techniques in SSD storage systems

Komsul, Muhammed Ziya January 2017 (has links)
Flash-based storage systems offer high density, robustness, and reliability for embedded applications; however, the physical nature of flash memory places limits on its use in high-reliability applications. To increase the reliability of flash-based storage systems, several RAID mechanisms have been proposed. These mechanisms permit data to be recovered onto a new replacement device when a particular device in the array reaches its endurance limit, and they need regular garbage collection to manage free resources efficiently. Both operations raise response-time concerns: while garbage collection or a device replacement is underway, the flash memory cannot be used by the application layer for an uncertain period of time. This non-determinism in response time is problematic in high-reliability systems that require real-time guarantees. Existing solutions to garbage collection consider only a single flash chip and ignore architectures, such as RAID, in which multiple flash memories are used in one storage system. Traditional replacement mechanisms designed for magnetic storage media do not suit the characteristics of flash memory. The aim of this thesis is to improve the reliability of SSD RAID mechanisms by providing guaranteed access times for hard real-time embedded applications. To investigate this hypothesis, a number of novel mechanisms are proposed with the goal of enhancing data reliability in an SSD array. Two of them solve the non-determinism caused by garbage collection without disturbing the reliability mechanism, unlike existing techniques. The third is a set of device replacement techniques for replacing elements in the array, increasing system dependability by providing continuous system availability with higher I/O performance for hard real-time embedded applications. A global flash translation layer with the novel garbage collection mechanisms, the on-line device replacement techniques, and their associated controllers are implemented on our FPGA SSD RAID controller. Contrary to traditional approaches, a dynamic preemptive cleaning mechanism adopts a dynamic cleaning feature which does not disturb the reliability mechanism. In addition, a garbage-collection-aware RAID mechanism is introduced to further improve the maximum response time of the system. The on-line device replacement techniques address the limitations of device replacement and thus provide more deterministic response times. The reliability, real-time behaviour and performance of these mechanisms are also evaluated with a trace-driven simulator on a number of synthetic and realistic traces. The contributions of this thesis are as follows: the presentation of novel mechanisms that enable real-time support for RAID techniques in SSD devices, the development of a number of mechanisms that enhance the performance and reliability of flash-based storage, the implementation of their controllers, and the provision of a complete test bed for investigating these behaviours.
33

Visual data association : tracking, re-identification and retrieval

Zheng, Feng January 2016 (has links)
With the rapid development of the information society, large amounts of multimedia data are generated, shared and transferred on various electronic devices and over the Internet every minute. Building intelligent systems capable of associating these visual data at diverse locations and different times is therefore essential, and will significantly facilitate understanding and identifying where an object came from and where it is going. The estimated traces of motion or change, in turn, increasingly make it feasible to apply advanced algorithms to real-world applications, including human-computer interaction, robotic navigation, security in surveillance, biological characteristics association and civil structure vibration detection. However, due to inherent challenges such as ambiguity, heterogeneity, noisy data, the large-scale property and unknown variations, visual data association is currently far from established. This thesis therefore focuses on associating visual data at diverse locations and different times for the tasks of tracking, re-identification and retrieval. More specifically, three situations, a single camera, multiple cameras and multiple modalities, have been investigated, and four algorithms have been developed at different levels.

Chapter 3: The first algorithm explores an ensemble system for robust object tracking, primarily considering the independence of classifier members. An empirical analysis first shows that object tracking is a non-i.i.d. sampling, under-sampled and incomplete-dataset problem. A set of independent classifiers, trained sequentially on different small datasets, is then dynamically maintained to overcome this particular machine learning problem, so that for every challenge an optimal classifier can be approximated in a subspace spanned by the selected competitive classifiers.

Chapter 4: The second method improves object tracking by exploiting a winner-take-all strategy to select the most suitable trackers. This naturally extends the ensemble concept of the first topic to a more general idea: a multi-expert system whose members come from different function spaces, so that the diversity of the system is more likely to be amplified. Based on a large public dataset, a model predicting the performance of different trackers on various challenges is obtained off-line; the learned structural regression model can then be used directly to select the winning tracker efficiently online.

Chapter 5: The third algorithm learns cross-view identities for fast person re-identification in a cross-camera setting, which differs significantly from the single-view object tracking of the first two topics. Two sets of discriminative hash functions, one for each view, are learned by simultaneously minimising their distance in the Hamming space and maximising the cross-covariance and margin. Similar binary codes can thus be found for images of the same person captured from different views by embedding the images into the Hamming space.

Chapter 6: The fourth model develops a novel hetero-manifold regularisation framework for efficient cross-modal retrieval. Compared with the first two settings, this is a more general and complex topic, in which the samples may be images captured at a great distance or after a long time, or even text, voice and other formats. Taking advantage of the hetero-manifold, the similarity between each pair of heterogeneous data can be measured naturally by three-order random walks on this hetero-manifold.

It is concluded that, by fully exploiting the algorithms for solving the problems in these three situations, an integrated trace of an object moving anywhere can be discovered.
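A minimal sketch of the matching step that learned cross-view hash functions enable: once images are embedded as binary codes, identity matching reduces to nearest-neighbour search under Hamming distance. Random codes stand in for learned ones; the hash learning itself is not reproduced here.

```python
# Illustrative sketch only: Hamming-distance matching of binary codes across views.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)   # codes from camera view A
probe = gallery[42] ^ (rng.random(64) < 0.05)                    # view B code of person 42, a few bits flipped

# Hamming distance via XOR and per-row popcount; the smallest distance is the match.
distances = np.count_nonzero(gallery ^ probe.astype(np.uint8), axis=1)
print("best match index:", int(np.argmin(distances)))            # almost certainly 42
```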
34

Behavioural synthesis of run-time reconfigurable systems

Esrafili-Gerdeh, Donald January 2016 (has links)
MOODS (Multiple Objective Optimisation in Data and control path Synthesis) is a behavioural synthesis system which can automatically generate a number of structural descriptions of a digital circuit from a single behavioural one. Although each structural description is functionally equivalent to the next, each has different properties, such as circuit area or delay. The structural description finally selected is the one which best meets the user's optimisation goals and constraints. Run-time reconfigurable systems operate through multiple configurations of the programmable hardware on which they are implemented, dynamically allocating resources 'on the fly' during their execution. The partially reconfigurable devices upon which they are based enable areas of their configuration memory to be rewritten without disturbing the operation of existing configurations, unless so desired. This characteristic may be exploited by partitioning a circuit into a number of distinct temporal contexts which, when ultimately realised as device-level configurations, may be swapped in and out of the device's configuration memory as the run-time operation of the circuit dictates. At any point during the execution of the temporally partitioned circuit, the area required to implement it is equal to the size of the largest context, and not the sum of its constituent parts as would be the case in a non-reconfigurable implementation. This reduction in circuit area comes at the cost of a reconfiguration overhead, determined by the time taken to partially reconfigure the device with each configuration and the frequency at which this form of context switching occurs. This thesis describes an extension to the original MOODS system, enabling it to quantify the trade-off between the potential area reduction offered by run-time reconfiguration and the reconfiguration overhead incurred as a result. In addition to performing the temporal partitioning alongside existing circuit optimisation, MOODS is now able to automatically generate the infrastructure needed to support a practical implementation of the temporal contexts on a commercial Field Programmable Gate Array.
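A minimal worked example of the area/overhead trade-off described above, with invented numbers: a run-time reconfigurable implementation needs only the largest context's area, but pays a reconfiguration penalty on every context swap.

```python
# Illustrative sketch only: hypothetical context areas and reconfiguration timings.
context_areas = [1200, 800, 950]          # area units per temporal context (invented)
static_area = sum(context_areas)          # non-reconfigurable implementation
reconfigurable_area = max(context_areas)  # partially reconfigurable implementation

reconfig_time_us = 350.0                  # assumed time to load one context
swaps_per_second = 100                    # assumed context-switch frequency
overhead_fraction = reconfig_time_us * 1e-6 * swaps_per_second

print(f"area saved: {static_area - reconfigurable_area} units "
      f"({1 - reconfigurable_area / static_area:.0%})")
print(f"time spent reconfiguring: {overhead_fraction:.1%} of execution")
```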
35

Hierarchical strategies for fault-tolerance in reconfigurable architectures

Lawson, David January 2015 (has links)
This thesis presents a novel hierarchical fault-tolerance methodology for fault recovery in reconfigurable devices. As the semiconductor industry moves to producing ever smaller transistors, the number of faults occurring increases. At current technology nodes, unavoidable variations in production cause transistor devices to perform outside their ideal ranges. This variability manifests as faults at higher levels and has a knock-on effect on yields. In some ways, fault tolerance has never been more important. To better explore the area of variability, a novel reconfigurable architecture was designed: the Programmable Analogue and Digital Array (PAnDA). By allowing reconfiguration from the transistor level up to the logic block level, PAnDA permits design space exploration in hardware that was previously only available through simulation. The main advantage of this is that design modifications can be tested almost instantaneously, as opposed to running time-consuming transistor-level simulations. As a result of this design, each level of PAnDA's configuration exhibits structural homogeneity, allowing multiple implementations of the same circuit on the same hardware. This potentially creates opportunities for fault tolerance through reconfiguration, and so experimental work is performed to discover how best to utilise these properties of PAnDA. The findings show that it is possible to optimise the reconfiguration in the event of a fault, even if the nature and location of the fault are unknown.
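A minimal sketch of recovery by reconfiguration, exploiting the structural homogeneity described above: step through functionally equivalent mappings of the same circuit until one avoids the fault. The configurations and the pass/fail test are placeholders; the real PAnDA flow operates on transistor- and logic-level configurations in hardware.

```python
# Illustrative sketch only: hypothetical fault recovery by trying equivalent mappings.
def circuit_works(configuration, faulty_resources):
    """Hypothetical test: a mapping fails if it uses any faulty resource."""
    return not (set(configuration) & faulty_resources)

def recover(equivalent_configs, faulty_resources):
    """Return the first functionally equivalent mapping that avoids the fault."""
    for config in equivalent_configs:
        if circuit_works(config, faulty_resources):
            return config
    return None

configs = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]    # three equivalent placements
print(recover(configs, faulty_resources={1}))   # -> [3, 4, 5]
```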
36

A racetrack memory based on exchange bias

Polenciuc, Ioan January 2016 (has links)
This thesis describes preliminary studies for a new type of computer memory, racetrack memory, initially proposed by scientists at IBM. Data in racetrack memory is stored in magnetic domains within ferromagnetic nanowires, with adjacent domains separated by domain walls; the data is moved along the wires by moving the domain walls. Control over the movement of domain walls was initially attempted using notches cut into the wires, but these were not only expensive and difficult to fabricate but also proved unreliable. The method for pinning domain walls described in this thesis instead uses antiferromagnetic wires grown perpendicular to the ferromagnetic wires, so that exchange bias is induced at the crossing points. Exchange bias occurs when an antiferromagnet is in contact with a ferromagnet: when the structure is cooled in an applied field from near the Néel temperature of the antiferromagnet, the hysteresis loop shifts along the field axis, resulting in pinning of the ferromagnetic layer. Multiple materials were considered for the ferromagnetic layer. Initially, unpinned ferromagnetic films were grown and characterised. Exchange-biased films were then grown in configurations with the antiferromagnetic layer either below or above the ferromagnetic layer, but these showed no major differences in exchange bias. Ferromagnetic wires were patterned on Si substrates using e-beam and photolithography, and the coercivity of the wires was measured along their length. Exchange-biased wires in both top- and bottom-pinned configurations were then fabricated using the same methods and characterised with the same technique as the unbiased wires. The comparison between the biased and unbiased wires showed that domain walls can be pinned in nanowires using exchange bias; the top-bias configuration showed a maximum pinning value of about 55 Oe, which is comparable to that initially reported in notched systems.
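A minimal sketch of how coercivity and the exchange bias field are conventionally extracted from the two switching fields of a shifted hysteresis loop. The numbers are invented and are not measurements from the thesis.

```python
# Illustrative sketch only: standard loop-shift analysis for an exchange-biased film.
def loop_parameters(h_switch_up: float, h_switch_down: float):
    """Coercivity is half the loop width; the exchange bias field is the
    offset of the loop centre from zero applied field."""
    coercivity = abs(h_switch_up - h_switch_down) / 2.0
    exchange_bias = (h_switch_up + h_switch_down) / 2.0
    return coercivity, exchange_bias

hc, heb = loop_parameters(h_switch_up=-15.0, h_switch_down=-95.0)  # fields in Oe (invented)
print(f"Hc = {hc:.0f} Oe, Heb = {heb:.0f} Oe")                      # Hc = 40 Oe, Heb = -55 Oe
```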
37

Efficient design and implementation of elliptic curve cryptography on FPGA

Khan, Zia January 2016 (has links)
This thesis is concerned with challenging the design space of Elliptic Curve Cryptography (ECC) over the binary Galois field GF(2^m), implemented in hardware on field-programmable gate arrays (FPGAs), in terms of area, speed and latency. Novel contributions are made at the algorithmic, architectural and implementation levels that produce leading performance figures for the key hardware implementation metrics on FPGA. This demonstrated performance will enable ECC to be deployed, using FPGA technology, across a range of applications requiring public-key security. The proposed low-area ECC implementation outperforms the relevant state of the art in both the area-time and area²-time metrics. The proposed high-throughput ECC implementation adopts a new digit-serial multiplier over GF(2^m), incorporating a novel pipelining technique along with modifications at the algorithmic and architectural levels to support parallel operations at the arithmetic level; the resulting throughput/area performance outperforms state-of-the-art FPGA designs to date. The proposed high-speed-only implementation utilises a new full-precision multiplier and smart scheduling of the point multiplication to reduce latency; the resulting high-speed ECC design with three multipliers achieves the lowest latency reported to date (450 clock cycles, giving 2.83 μs on a Virtex-7). Finally, the proposed low-resource, scalable ECC implementation is based on very-low-latency multiprecision multiplication and multiprecision squaring. The scalable ECC point multiplication design over all NIST curves achieves very low latency and shows the best area-time performance on FPGA to date.
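A minimal software sketch of the core field operation behind the multipliers discussed above: shift-and-XOR multiplication in GF(2^m) with interleaved polynomial reduction. The reduction polynomial used is the one commonly quoted for NIST B-163, x^163 + x^7 + x^6 + x^3 + 1, chosen here purely for illustration; the hardware designs in the thesis are digit-serial or full-precision rather than bit-serial.

```python
# Illustrative sketch only: bit-serial multiplication in GF(2^163).
M = 163
R = (1 << 7) | (1 << 6) | (1 << 3) | 1   # x^163 is congruent to x^7 + x^6 + x^3 + 1

def gf2m_mul(a: int, b: int) -> int:
    """Multiply field elements a, b (degree < M) with interleaved reduction."""
    result = 0
    for i in range(M - 1, -1, -1):
        result <<= 1                       # multiply accumulator by x
        if (result >> M) & 1:              # reduce when degree reaches M
            result = (result & ((1 << M) - 1)) ^ R
        if (b >> i) & 1:                   # conditionally add a
            result ^= a
    return result

a = (1 << 162) ^ 0b1011                    # an arbitrary element of GF(2^163)
print(hex(gf2m_mul(a, 0b10)))              # multiply by x, forcing one reduction -> 0xdf
```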
38

Asynchrobatic logic for low-power VLSI design

Willingham, David John January 2010 (has links)
In this work, Asynchrobatic Logic is presented: a novel low-power design style that combines the energy-saving benefits of asynchronous logic and adiabatic logic to produce systems whose power dissipation is reduced in several different ways. The term “Asynchrobatic” is a new word coined to describe these types of system, derived from the concatenation and shortening of “Asynchronous” and “Adiabatic Logic”. This thesis introduces the concept and theory behind Asynchrobatic Logic. It first provides an introductory background to both underlying parent technologies (asynchronous logic and adiabatic logic), continuing with an explanation of a number of possible methods for designing the complex data-path cells used in the adiabatic data-path. Asynchrobatic Logic is then introduced through a comparison between asynchronous and Asynchrobatic buffer chains, showing that for wide systems it operates more efficiently. Two more complex sub-systems are presented: first, a layout implementation of the substitution boxes from the Twofish encryption algorithm; second, a front-end-only simulation (without parasitic capacitances or resistances) demonstrating a functional system capable of calculating the Greatest Common Divisor (GCD) of a pair of 16-bit unsigned integers, which under typical conditions on a 0.35 μm process executed a test vector requiring twenty-four iterations in 2.067 μs with a power consumption of 3.257 nW. These examples show that the concept of Asynchrobatic Logic has the potential to be used in real-world applications, and is not just theory without application. At the time of its first publication in 2004, Asynchrobatic Logic was both unique and ground-breaking: it was the first time that consideration had been given to operating large-scale adiabatic logic in an asynchronous fashion, and the first time that Asynchronous Stepwise Charging (ASWC) had been used to drive an adiabatic data-path.
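A minimal sketch of the kind of iterative, subtract-and-swap GCD computation such a 16-bit data-path might perform, with an iteration counter. The test vector below is invented; the thesis's actual twenty-four-iteration vector is not reproduced here, and the hardware data-path is not necessarily subtraction-based.

```python
# Illustrative sketch only: iterative GCD on 16-bit unsigned values with an iteration count.
def gcd_iterative(a: int, b: int):
    """Greatest common divisor by repeated subtraction, returning (gcd, iterations)."""
    iterations = 0
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
        iterations += 1
    return a, iterations

print(gcd_iterative(0x00F0, 0x0036))   # gcd(240, 54) -> (6, 9)
```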
39

Plasma physics computations on emerging hardware architectures

Chorley, Joanne Clare January 2016 (has links)
This thesis explores the potential of emerging hardware architectures to increase the impact of high performance computing in fusion plasma physics research. For next-generation tokamaks such as ITER, realistic simulations and data-processing tasks will become significantly more demanding of computational resources than at current facilities. It is therefore essential to investigate how emerging hardware such as the graphics processing unit (GPU) and the field-programmable gate array (FPGA) can provide the computing power required for large data-processing tasks and large-scale simulations in plasma-physics-specific computations. The use of emerging technology is investigated in three areas relevant to nuclear fusion: (i) a GPU is used to process the large amount of raw data produced by the synthetic aperture microwave imaging (SAMI) plasma diagnostic; (ii) a GPU is used to accelerate the solution of the Bateman equations, which model the evolution of nuclide number densities under neutron irradiation in tokamaks; and (iii) an FPGA-based dataflow engine is applied to massive matrix multiplications, a feature of many computational problems in fusion and, more generally, in scientific computing. The GPU data-processing code for SAMI provides a 60x acceleration over the previous IDL-based code, enabling inter-shot analysis in future campaigns and the data-mining (and therefore analysis) of stored raw data from previous MAST campaigns. The feasibility of porting the whole Bateman solver to a GPU system is demonstrated and verified against the industry-standard FISPACT code. Finally, a dataflow approach to matrix multiplication is shown to provide a substantial acceleration compared with CPU-based approaches and, whilst not performing as well as a GPU for this particular problem, to be much more energy efficient. Emerging hardware technologies will no doubt continue to make a positive contribution to performance in many areas of fusion research, and several exciting developments are on the horizon, with tighter integration of GPUs and FPGAs with their host central processing units. This should not only improve performance and reduce data-transfer bottlenecks, but also allow more user-friendly programming tools to be developed. All of this has implications for ITER and beyond, where emerging hardware technologies will no doubt provide the key to delivering the computing power required to handle the large amounts of data and the more realistic simulations demanded by these complex systems.
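A minimal sketch of the structure of the Bateman problem: the coupled equations form a linear system dN/dt = A·N whose solution over a time step is N(t) = exp(A·t)·N(0). The three-nuclide decay chain and decay constants below are invented, and neutron-induced production/destruction terms are omitted; FISPACT-scale problems couple thousands of nuclides.

```python
# Illustrative sketch only: solving a tiny decay chain via the matrix exponential.
import numpy as np
from scipy.linalg import expm

lam = np.array([1e-3, 5e-4, 0.0])          # hypothetical decay constants (1/s)
A = np.array([[-lam[0],     0.0, 0.0],
              [ lam[0], -lam[1], 0.0],
              [    0.0,  lam[1], 0.0]])     # chain: nuclide 0 -> 1 -> 2 (stable)

N0 = np.array([1e20, 0.0, 0.0])             # initial number densities
t = 3600.0                                  # one hour
print(expm(A * t) @ N0)                     # number densities after one hour of decay
```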
40

Lightweight physical unclonable functions circuit design and analysis

Gu, Chongyan January 2016 (has links)
Mobile electronic devices have emerged rapidly over the last two decades and are now ubiquitous: they can be found in our homes, our cars and our workplaces, and have the potential to revolutionise how we interact with the world today. This has led to a high demand for cryptographic devices that can provide authentication to protect user privacy and data security; however, conventional cryptographic approaches suffer from a number of shortcomings. Moreover, today’s mobile devices are low-cost, low-power, embedded devices that are restricted in both memory and computing power, so conventional cryptographic approaches are typically unsuitable as they incur significant timing, energy and area overheads. Physical unclonable functions (PUFs) are a novel security primitive which utilise the inherent variations that occur during the manufacturing process to generate a unique intrinsic identifier for a device, giving them an advantage over current state-of-the-art alternatives. No special manufacturing processes are required to integrate a PUF into a design, lowering the overall cost of the IC, and everything can be kept on-chip, enabling the PUF to be used as a hardware root of trust for all security- or identity-related operations on the device. This enables a multitude of higher-level operations based on secure key storage and chip authentication. However, the design and implementation of PUF digital circuits is challenging, particularly for Field Programmable Gate Array (FPGA) devices: since the circuits depend upon process variations, even small changes in environmental conditions, such as voltage or temperature, or an unbalanced design that introduces skew, will affect their performance. In this thesis, a number of novel lightweight PUF techniques are proposed and experimentally validated. Furthermore, previously reported PUF techniques are evaluated and compared with the proposed designs in terms of efficiency and a range of performance metrics.
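A minimal sketch of the two Hamming-distance metrics commonly used to evaluate PUF responses, of the kind behind the performance comparisons mentioned above. The random bits stand in for measured responses; they are not data from the designs in the thesis.

```python
# Illustrative sketch only: uniqueness (inter-device) and reliability (intra-device) metrics.
import numpy as np

rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(10, 128))        # 10 devices, 128-bit responses (simulated)
noisy = responses ^ (rng.random((10, 128)) < 0.03)    # re-measurements with ~3% bit flips

def uniqueness(r):
    """Mean inter-device fractional Hamming distance (ideal: 0.5)."""
    d = [np.mean(r[i] != r[j]) for i in range(len(r)) for j in range(i + 1, len(r))]
    return float(np.mean(d))

def reliability(r, r_noisy):
    """Mean intra-device bit agreement across repeated measurements (ideal: 1.0)."""
    return float(np.mean(r == r_noisy))

print(f"uniqueness ~ {uniqueness(responses):.3f}, reliability ~ {reliability(responses, noisy):.3f}")
```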
