  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

A study of the scaling and advanced functionality potential of phase change memory devices

Hayat, Hasan January 2016 (has links)
As traditional volatile and non-volatile data storage and memory technologies such as SRAM, DRAM, Flash and HDD face fundamental scaling challenges, scientists and engineers are forced to search for and develop alternative technologies for future electronic and computing systems that are relatively free from scaling issues, have lower power consumption, higher storage densities and faster speeds, and can be easily integrated on-chip with microprocessor cores. This thesis focuses on the scaling and advanced functionality potential of one such memory technology, Phase Change Memory (PCM), a leading contender to complement or even replace the above-mentioned traditional technologies. In the first part of the thesis, a physically realistic multiphysics cellular automata PCM device modelling approach was used to study the scaling potential of conventional and commercially viable PCM devices. It was demonstrated that mushroom-type and patterned probe PCM devices can indeed be scaled down to ultrasmall (single-nanometer) dimensions, and that in doing so, ultralow programming currents (sub-20 μA) and ultrahigh storage densities (~10 Tb/in²) can be achieved. The modelling approach also provided detailed insight into key PCM device characteristics, such as amorphization (Reset) and crystallization (Set) kinetics, thermal confinement, and the important resistance window, i.e. the difference in resistance between the Reset and Set states. In the second part of the thesis, the same modelling approach was used to assess the feasibility of some advanced functionalities of PCM devices, such as neuromorphic computing and phase change metadevices.
It was demonstrated that by utilizing the accumulation mode of operation inherent to phase change materials, a physical PCM device can be combined with an external comparator-type circuit to deliver a ‘self-resetting spiking phase change neuron’, which, when combined with phase change synapses, can potentially open a new route to the realization of all-phase-change neuromorphic computers. It was further shown that it is indeed feasible to design and ‘electrically’ switch practicable phase change metadevices (for absorber and modulator applications, suited to operation in the technologically important near-infrared range of the spectrum). Finally, it was demonstrated that the Gillespie Cellular Automata (GCA) phase change model is capable of exhibiting ‘non-Arrhenius’ kinetics of crystallization, which were found to be in good agreement with reported experimental studies.
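The Gillespie Cellular Automata model mentioned above draws stochastic event choices and waiting times from per-cell transition rates. As a rough illustration only (the function name and rate list are assumptions for this sketch, not the thesis's implementation), a single Gillespie selection step looks like:

```python
import math
import random

def gillespie_step(rates, rng=random.random):
    """One step of the Gillespie algorithm: given a list of per-event
    rates, draw an exponentially distributed waiting time and pick one
    event with probability proportional to its rate."""
    total = sum(rates)
    dt = -math.log(rng()) / total          # exponential waiting time
    r, acc = rng() * total, 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            return i, dt
    return len(rates) - 1, dt              # guard against rounding
```

In a crystallization model each rate would be a temperature-dependent attachment or detachment rate for one cell, which is how non-Arrhenius behaviour can emerge from the ensemble.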
42

Crowdsourced intuitive visual design feedback

Robb, David Allan January 2015 (has links)
For many people, images are a medium preferable to text, and yet, with the exception of star ratings, most formats for conventional computer-mediated feedback focus on text. This thesis develops a new method of crowd feedback for designers based on images. Visual summaries are generated from the feedback images a crowd chooses in response to a design, providing the designer with impressionistic and inspiring visual feedback. The thesis sets out the motivation for this new method and describes the development of perceptually organised image sets and a summarisation algorithm to implement it. Evaluation studies are reported which, through a mixed-methods approach, provide evidence of the validity and potential of the new image-based feedback method. It is concluded that the visual feedback method would be more appealing than text for users with a visual cognitive style. Indeed, the evaluation studies provide evidence that such users believe images are as good as text when communicating their emotional reaction to a design. Designer participants reported being inspired by the visual feedback where comparable text feedback did not inspire them. They also reported that the feedback can represent the perceived mood in their designs, and that they would be enthusiastic users of a service offering this new form of visual design feedback.
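In greatly simplified form, the summarisation step can be imagined as ranking the images a crowd selects and keeping the most frequently chosen ones. This is a sketch under that assumption only; the thesis's actual algorithm operates on perceptually organised image sets and is considerably more sophisticated:

```python
from collections import Counter

def visual_summary(selections, k=5):
    """Build a crude visual summary: count how often each image was
    chosen across all crowd responses and keep the k most frequent."""
    counts = Counter(img for chosen in selections for img in chosen)
    return [img for img, _ in counts.most_common(k)]
```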
43

Fault detection, isolation and recovery schemes for spaceborne reconfigurable FPGA-based systems

Siegle, Felix January 2016 (has links)
This research contributes to a better understanding of how reconfigurable Field Programmable Gate Array (FPGA) devices can safely be used as part of satellite payload data processing systems that are exposed to the harsh radiation environment in space. Despite a growing number of publications about low-level mitigation techniques, only a few studies are concerned with high-level Fault Detection, Isolation and Recovery (FDIR) methods, applied to FPGAs in a similar way as they are applied to other systems on board spacecraft. This PhD thesis contains several original contributions to knowledge in this field. First, a novel Distributed Failure Detection method is proposed, which applies FDIR techniques to multi-FPGA systems by shifting failure detection mechanisms to a higher intercommunication network level. The proposed approach therefore scales to larger and more complex systems better than other approaches, since the data processing hardware blocks to which FDIR is applied can easily be distributed over the intercommunication network. Second, an innovative Availability Analysis method is proposed that allows these FDIR techniques to be compared in terms of their reliability performance; it can also be used to predict the reliability of a specific hardware block in a particular radiation environment. Finally, the proposed methods were implemented as part of a proof-of-concept system. On the one hand, this system enabled a fair comparison of different FDIR configurations in terms of power, area and performance overhead. On the other hand, the proposed methods were all successfully validated in an accelerated proton irradiation test campaign, in which parts of the system were exposed to the proton beam while the proof-of-concept application was actively running.
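To give a flavour of the kind of quantity an availability analysis produces, the sketch below uses a textbook two-state (up/down) Markov model with an upset rate and a recovery rate. The model and parameter names are generic illustrations, not the thesis's actual method:

```python
def availability(upset_rate, recovery_rate):
    """Steady-state availability of a repairable block modelled as a
    two-state Markov chain: up -> down at upset_rate (upsets/s),
    down -> up at recovery_rate (recoveries/s)."""
    return recovery_rate / (upset_rate + recovery_rate)
```

For example, a block upset once per 10,000 s that recovers within 10 s on average is available a little over 99.9% of the time; a harsher orbit (higher upset rate) lowers that figure, which is the sort of comparison such an analysis supports.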
44

Improving memory access performance for irregular algorithms in heterogeneous CPU/FPGA systems

Bean, Andrew January 2016 (has links)
Many algorithms and applications in scientific computing exhibit irregular access patterns: consecutive accesses depend on the structure of the data being processed and so cannot be known a priori. This manifests itself as a lack of temporal and spatial locality, meaning these applications often perform poorly in traditional processor cache hierarchies. This thesis demonstrates that heterogeneous architectures containing Field Programmable Gate Arrays (FPGAs) alongside traditional processors can improve memory access throughput by 2-3x by using the FPGA to insert data directly into the processor cache, eliminating costly cache misses. When fetching data to be processed directly on the FPGA, scatter-gather Direct Memory Access (DMA) provides the best performance, but its storage format is inefficient for these classes of applications. The optimised storage presented here, together with on-demand generation of the descriptors, leads to a 16x reduction in on-chip Block RAM usage and a two-thirds reduction in data transfer time. Traditional scatter-gather DMA requires a statically defined list of access instructions and is managed by a host processor. The system presented in this thesis expands the DMA operation to allow data-driven memory requests in response to processed data and brings all control on-chip, allowing autonomous operation. This dramatically increases system flexibility and provides a further 11% performance improvement. Graph applications and algorithms for traversing and searching graph data are used throughout this thesis as a motivating example for the optimisations presented, though these should be equally applicable to a wide range of irregular applications within scientific computing.
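The idea of generating descriptors on demand from a compact representation, rather than storing a full descriptor table in Block RAM, can be sketched in software. The (address, length) descriptor shape and the function name are illustrative assumptions, not the thesis's hardware format:

```python
def descriptors_on_demand(base, offsets, length):
    """Yield scatter-gather DMA descriptors (address, length) lazily
    from a compact base address plus offset list, instead of storing a
    full, statically defined descriptor table."""
    for off in offsets:
        yield (base + off, length)
```

The hardware analogue is that only the compact offsets are stored on-chip and each full descriptor is materialised just before its transfer issues, which is where the Block RAM saving comes from.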
45

Formally modelling and verifying the FreeRTOS real-time operating system

Cheng, Shu January 2014 (has links)
Formal methods offer an alternative way to develop software, applying mathematical techniques to software design and verification. They ensure logical consistency between the requirements and the behaviour of the software, because each step in the development process, i.e. abstracting the requirements, design, refinement and implementation, is verified by mathematical techniques. In ordinary software development, by contrast, the correctness of the software is tested only at the end of the development process, and such testing is necessarily limited and incomplete. Formal methods thus provide higher-quality software than ordinary software development. At the same time, real-time operating systems are playing increasingly important roles in embedded applications, so formal verification of this kind of software is of strong interest. FreeRTOS has a wide community of users: it is regarded by many as the de facto standard for micro-controllers in embedded applications. This project formally specifies the behaviour of FreeRTOS in Z, and its consistency is verified using the Z/Eves theorem prover. This includes a precise statement of the preconditions for all API commands. Based on this model, (a) code-level annotations for verifying the task-related API are produced with Microsoft's Verifying C Compiler (VCC); and (b) an abstract model for the extension of FreeRTOS to multi-core architectures is specified in the Z notation. This work forms the basis of future work: refining the models to code to produce verified implementations for both single- and multi-core platforms.
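To give a flavour of what a Z specification looks like, here is a small, entirely hypothetical schema in the LaTeX markup conventionally used for Z (fuzz-style). The names and invariants are illustrative only and do not come from the thesis's FreeRTOS model:

```latex
% Hypothetical scheduler state: a sequence of ready tasks and at most
% one running task, drawn from the ready tasks.
\begin{schema}{Scheduler}
  ready   : \seq TASK \\
  running : \power TASK
\where
  \# running \leq 1 \\
  running \subseteq \ran ready
\end{schema}
```

A Z model of an RTOS states invariants like these once, and every API operation schema is then proved to preserve them, which is what the Z/Eves consistency checks establish.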
46

Atomistic models of magnetic systems with combined ferromagnetic and antiferromagnetic order

Barker, Joseph January 2013 (has links)
There has long been an interest in exploiting novel magnetic behaviour for practical applications such as magnetic storage devices. Some of the most interesting dynamical behaviour occurs when a material contains both ferromagnetic (FM) and antiferromagnetic (AF) characteristics, yet many such systems have gone unstudied because of the practical difficulty of experimentally observing antiferromagnetic order. In this work several systems of current interest which contain both AF and FM order are studied. These materials and systems are used in, or are candidates for, technological applications, especially magnetic storage devices. The forefront of this area is concerned with laser-induced magnetisation reversal, where there are many unexplained phenomena, especially in ferrimagnetic and metamagnetic materials. A combination of analytical and large-scale numerical calculations is used, with comparison to experimental data where available. The approach is generally based on so-called atomistic spin dynamics, where the Landau-Lifshitz-Gilbert equation, augmented with a Langevin term, is solved for each atomic moment. This allows the description of magnetic materials at elevated temperatures and through phase transitions. Semi-analytic formalisms are studied, with comparison to atomistic spin dynamics and micromagnetics, to inform multiscale modelling techniques. The excitation of a localised mode in an antiferromagnetic layer coupled to a ferromagnetic layer is studied. It is shown that this excitation leads to an enhanced damping of the ferromagnet, an important consideration for the design and optimisation of spin valves. The metamagnet FeRh, which undergoes an antiferromagnetic-ferromagnetic phase transition, is also investigated. There is much debate about the origin of this phase transition, and a model is constructed in this work which demonstrates that an all-magnetic origin is possible if effective four-spin exchange terms are considered.
This model is also capable of explaining the observed dynamics in femtosecond laser heating experiments. Finally, the spin wave dynamics of the prototypical amorphous ferrimagnet GdFeCo are considered. The thermally induced switching which has been discovered in this material is explained as the excitation of a two-magnon state.
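The equation of motion referred to above, the Landau-Lifshitz-Gilbert equation augmented with a Langevin thermal term, is commonly written per atomic spin in the atomistic spin dynamics literature as follows (standard form with conventional symbols, not transcribed from the thesis):

```latex
\frac{\partial \mathbf{S}_i}{\partial t}
  = -\frac{\gamma}{1+\lambda^{2}}\,
    \mathbf{S}_i \times \bigl( \mathbf{H}_i + \lambda\, \mathbf{S}_i \times \mathbf{H}_i \bigr),
\qquad
\mathbf{H}_i = -\frac{1}{\mu_s}\frac{\partial \mathcal{H}}{\partial \mathbf{S}_i}
               + \boldsymbol{\zeta}_i(t),
```

where the Langevin field \(\boldsymbol{\zeta}_i(t)\) is white noise with

```latex
\langle \zeta_i^{a}(t) \rangle = 0,
\qquad
\langle \zeta_i^{a}(t)\, \zeta_j^{b}(t') \rangle
  = \frac{2 \lambda k_B T \mu_s}{\gamma}\,
    \delta_{ij}\, \delta_{ab}\, \delta(t - t').
```

Here \(\gamma\) is the gyromagnetic ratio, \(\lambda\) the damping parameter, \(\mu_s\) the atomic moment and \(\mathcal{H}\) the spin Hamiltonian; the fluctuation-dissipation scaling of the noise is what lets the method describe elevated temperatures and phase transitions.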
47

Speckle pattern interferometry : vibration measurement based on a novel CMOS camera

Santonocito, Daniele January 2013 (has links)
A digital speckle pattern interferometer based on a novel custom complementary metal-oxide-semiconductor (CMOS) array detector is described. The temporal evolution of the dynamic deformation of a test object is measured using inter-frame phase stepping. The flexibility of the CMOS detector is used to identify regions of interest with full-field time-averaged measurements and then to interrogate those regions with time-resolved measurements sampled at up to 7 kHz. The maximum surface velocity that can be measured and the number of measurement points are limited by the frame rate and the data transfer rate of the detector. The custom sensor used in this work is a modulated light camera (MLC), whose pixel design is still based on the standard four-transistor active pixel sensor (APS), but each pixel has four large, independently shuttered capacitors that drastically boost the well capacity beyond that of the diode alone. Each capacitor represents a channel which has its own shutter switch and can be operated either independently or in tandem with the others. This particular APS enables a novel approach to how the data are acquired and then processed. In this thesis we demonstrate how, at a given frame rate and a given number of measurement points, the data transfer rate of our system is increased compared with that of a system using the standard approach. Moreover, under some assumptions, the gain in system bandwidth does not entail any reduction in the maximum surface velocity that can be reliably measured with inter-frame phase stepping.
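Inter-frame phase stepping recovers the optical phase from a few intensity samples per pixel. The sketch below shows the standard four-step algorithm with pi/2 steps; it illustrates the principle only and makes no assumption about the MLC's channel scheduling:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the interference phase phi from four intensity samples
    I_k = A + B*cos(phi + (k-1)*pi/2).  With pi/2 steps,
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return math.atan2(i4 - i2, i1 - i3)
```

Tracking this phase frame to frame gives the surface displacement, which is why the achievable frame rate bounds the maximum measurable surface velocity.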
48

Reconstruction, identification and implementation methods for spiking neural circuits

Florescu, Dorian January 2016 (has links)
Integrate-and-fire (IF) neurons are time encoding machines (TEMs) that convert the amplitude of an analog signal into a non-uniform, strictly increasing sequence of spike times. This thesis addresses three major issues in the fields of computational neuroscience and neuromorphic engineering. The first problem concerns the formulation of the encoding performed by an IF neuron. The encoding mechanism is described mathematically by the t-transform equation, whose standard formulation is given by the projection of the stimulus onto a set of input-dependent frame functions. As a consequence, the standard methods reconstruct the input of an IF neuron in a space spanned by a set of functions that depend on the stimulus. The process becomes computationally demanding when performing reconstruction from long sequences of spike times. This issue is addressed here by developing a new framework in which the IF encoding process is formulated as a problem of uniform sampling on a set of input-independent time points. Based on this formulation, new algorithms are introduced for reconstructing the input of an IF neuron belonging to bandlimited as well as shift-invariant spaces. The algorithms are significantly faster than the standard reconstruction methods while providing a similar level of accuracy. Another important issue calls for inferring mathematical models of sensory processing systems directly from input-output observations. This problem was previously addressed by identifying sensory circuits consisting of linear filters in series with ideal IF neurons, reformulating the identification problem as one of stimulus reconstruction. The result was extended to circuits in which the ideal IF neuron was replaced by more biophysically realistic models, under the additional assumptions that the spiking neuron parameters are known a priori, or that input-output measurements of the spiking neuron are available.
This thesis develops two new identification methodologies for [Nonlinear Filter]-[Ideal IF] and [Linear Filter]-[Leaky IF] circuits consisting of two steps: the estimation of the spiking neuron parameters and the identification of the filter. The methodologies are based on the reformulation of the circuit as a scaled filter in series with a modified spiking neuron. The first methodology identifies an unknown [Nonlinear Filter]-[Ideal IF] circuit from input-output data. The scaled nonlinear filter is estimated using the NARMAX identification methodology for the reconstructed filter output. The [Linear Filter]-[Leaky IF] circuit is identified with the second proposed methodology by first estimating the leaky IF parameters with arbitrary precision using specific stimuli sequences. The filter is subsequently identified using the NARMAX identification methodology. The third problem addressed in this work is given by the need of developing neuromorphic engineering circuits that perform mathematical computations in the spike domain. In this respect, this thesis developed a new representation between the time encoded input and output of a linear filter, where the TEM is represented by an ideal IF neuron. A new practical algorithm is developed based on this representation. The proposed algorithm is significantly faster than the alternative approach, which involves reconstructing the input, simulating the linear filter, and subsequently encoding the resulting output into a spike train.
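For the ideal IF neuron, the t-transform is commonly written as the integral of the stimulus between consecutive spikes, \(\int_{t_k}^{t_{k+1}} x(u)\,du = \kappa\delta - b\,(t_{k+1}-t_k)\), with integration constant \(\kappa\), threshold \(\delta\) and bias \(b\). A discrete-time sketch of the forward encoding follows; parameter names and the rectangular-rule integration are simplifying assumptions for illustration:

```python
def if_encode(x, dt, bias, kappa, delta):
    """Ideal integrate-and-fire time encoding: accumulate
    (x(t) + bias) / kappa until the integral reaches the threshold
    delta, record the spike time, then subtract delta (reset)."""
    spikes, integral, t = [], 0.0, 0.0
    for sample in x:
        integral += (sample + bias) * dt / kappa
        t += dt
        if integral >= delta:
            spikes.append(t)
            integral -= delta
    return spikes
```

For a constant input the inter-spike interval is kappa*delta/(x+bias), which is the amplitude-to-time conversion the abstract describes.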
49

Algorithms and architectures for MCMC acceleration in FPGAs

Mingas, Grigorios January 2015 (has links)
Markov Chain Monte Carlo (MCMC) is a family of stochastic algorithms which are used to draw random samples from arbitrary probability distributions. This task is necessary to solve a variety of problems in Bayesian modelling, e.g. prediction and model comparison, making MCMC a fundamental tool in modern statistics. Nevertheless, due to the increasing complexity of Bayesian models, the explosion in the amount of data they need to handle and the computational intensity of many MCMC algorithms, performing MCMC-based inference is often impractical in real applications. This thesis tackles this computational problem by proposing Field Programmable Gate Array (FPGA) architectures for accelerating MCMC and by designing novel MCMC algorithms and optimization methodologies which are tailored for FPGA implementation. The contributions of this work include: 1) An FPGA architecture for the Population-based MCMC algorithm, along with two modified versions of the algorithm which use custom arithmetic precision in large parts of the implementation without introducing error in the output. Mapping the two modified versions to an FPGA allows for more parallel modules to be instantiated in the same chip area. 2) An FPGA architecture for the Particle MCMC algorithm, along with a novel algorithm which combines Particle MCMC and Population-based MCMC to tackle multi-modal distributions. A proposed FPGA architecture for the new algorithm achieves higher datapath utilization than the Particle MCMC architecture. 3) A generic method to optimize the arithmetic precision of any MCMC algorithm that is implemented on FPGAs. The method selects the minimum precision among a given set of precisions, while guaranteeing a user-defined bound on the output error. 
By applying the above techniques to large-scale Bayesian problems, it is shown that significant speedups (one or two orders of magnitude) are possible compared to state-of-the-art MCMC algorithms implemented on CPUs and GPUs, opening the way for handling complex statistical analyses in the era of ubiquitous, ever-increasing data.
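Population-based MCMC runs several chains at different "temperatures" and exchanges their states, which is what makes it both effective on multi-modal targets and attractive for parallel FPGA implementation. A minimal software sketch of the algorithm (often called parallel tempering) is given below; the function names and the scalar, random-walk setup are illustrative assumptions, not the thesis's architecture:

```python
import math
import random

def tempered_mcmc(log_target, temps, steps, step_size=1.0, seed=0):
    """Minimal population-based MCMC: one Metropolis random-walk chain
    per temperature, plus periodic swap proposals between neighbouring
    temperatures.  Returns the final state of each chain."""
    rng = random.Random(seed)
    xs = [0.0] * len(temps)
    for _ in range(steps):
        # Metropolis update within each tempered chain
        for i, t in enumerate(temps):
            prop = xs[i] + rng.gauss(0.0, step_size)
            if math.log(rng.random()) < (log_target(prop) - log_target(xs[i])) / t:
                xs[i] = prop
        # propose swapping a random pair of neighbouring temperatures
        i = rng.randrange(len(temps) - 1)
        a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
            log_target(xs[i + 1]) - log_target(xs[i]))
        if math.log(rng.random()) < a:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```

The inner per-temperature loop is embarrassingly parallel, which is exactly the structure an FPGA implementation exploits by instantiating one datapath per chain.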
50

Structure and functionality of novel nanocomposite granules for a pressure-sensitive ink with applications in touchscreen technologies

Dempsey, Sarah Jessica January 2016 (has links)
Tactile sensors are now ubiquitous in human-computer interaction, where mouse and keyboard functionality can be replaced with a trackpad or touchscreen sensor. In most technologies the sensor can detect the touch location only, with no information given on the force of the touch. In this thesis, functional components of a novel nanocomposite ink are developed which, when printed, form a pressure-sensitive interface that can detect both touch location and touch force. The physical basis of the force-sensitive response is investigated for the touchscreen sensor as a whole, as well as the intrinsic force-sensitivity of the ink components. In its earlier form, the nanocomposite ink that was the starting point of this study contained agglomerates of conductive nanoparticles, formed during blending of the ink, which provided the electrical functionality of the sensor. Here, novel nanocomposite granules were instead pre-fabricated prior to inclusion in the ink and designed to exhibit well-defined size, structure and strength. Control of these parameters was achieved through selection of the granule constituents, as well as the energy and duration of the granulation process. Incorporating the granules into the ink and screen-printing it to form a pressure-sensitive layer in a touchscreen test device allowed the functional performance to be assessed. Sensors containing pre-formed granules showed improved optical transmission compared with sensors containing the same mass loading of nanoparticles forming spontaneous agglomerates; agglomerates tend to create a larger number of small scattering centres, which scatter light to larger angles. The spatial variation in the force-resistance response, as well as the sensitivity of this response, was also linked to the distribution of the granules within the pressure-sensitive layer. The physical basis of the force-resistance response is two-fold.
Firstly, mathematical simulations showed that deflection of the upper electrode brings more granules into contact as the applied force increases, thereby decreasing the resistance through the sensor. Secondly, a force-sensitive resistance of the granules themselves was also observed at high forces. Analysis of the non-linear current-voltage characteristics suggested the presence of non-linear conduction pathways within the granules. In a random resistor network model, the non-linear current contribution decreased above approximately 0.7 N. To understand this effect, a model based on quantum tunnelling mechanisms was also applied; however, this provided a poor fit to the data and no further understanding could be gained.
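The first mechanism above, more granule contacts with increasing force, admits a very simple toy model: contacted granules conduct in parallel, so resistance falls roughly as one over the number of contacts. The linear contact count and the parameter names below are assumptions made purely for illustration:

```python
def sensor_resistance(force, r_granule, granules_per_newton):
    """Toy force-resistance model: assume the number of contacted
    granules grows linearly with applied force (in newtons), and that
    n identical granule resistances conduct in parallel."""
    n = max(1, round(force * granules_per_newton))
    return r_granule / n      # n identical resistors in parallel
```

This reproduces only the monotonic fall of resistance with force; the thesis's second mechanism, the force-sensitive resistance of the granules themselves, would require making r_granule itself a function of force.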
