631 |
The reliability of memory systems. Toutounjee, Mohamad Mazhar. January 1988.
No description available.
|
632 |
A study of poststenotic shear layer instabilities. Treiber, Jobst. January 1988.
No description available.
|
633 |
Digital holographic analysis to detect and estimate motion. Nassar, Nahy Saad. January 1990.
No description available.
|
634 |
Naturalness-preserving transform processing of digital signals. Osinubi, Adewole Olufolami. January 1990.
No description available.
|
635 |
Optimality conditions and sensitivity relations in dynamic optimization. Babad, Hannah Rachel. January 1991.
No description available.
|
636 |
Single event upset mitigation techniques in reconfigurable hardware. Vavouras, Michail. January 2017.
Advances in semiconductor technology shrink transistors so that more of them fit in the same area and performance increases, but this scaling poses a threat to the reliability of integrated circuits. Technology scaling accelerates transistor ageing and degradation, causing more faults during the lifetime of an integrated circuit. Fault sources such as manufacturing defects and the degradation and ageing of transistors impair integrated circuits, leading to permanent faults that can be catastrophic for certain applications. FPGAs, a special case of integrated circuits, also suffer from radiation-induced faults: they contain millions of bits configuring their resources, and if these bits are flipped by radiation they may change the intended functionality of the application running on the FPGA, causing a failure. However, FPGAs can be dynamically reconfigured in the field to mitigate radiation effects, providing fault tolerance and high availability. A novel fault-tolerant architecture for an artificial pancreas application is proposed, consisting of a mixed substrate of ASIC and FPGA. Fault detection is provided through modular redundancy, and dynamic reconfiguration is used as a repair mechanism. Experimental results show that, for permanent faults, a 5,100x lower probability of failure per hour (PFH) than dual modular redundancy (DMR) can be achieved with 2.4x more area than DMR. In addition, the proposed solution achieves 83x lower PFH than triple modular redundancy (TMR) with 1.6x area overhead when considering transient faults. A framework supporting fault injection into the configuration memory of an SRAM FPGA, together with scrubbing, was also developed in this work. The framework supports various single event upset (SEU) and scrub rates and is implemented on the modern ZYNQ FPGA architecture. Existing scrubbing strategies were implemented for a second-order polynomial case study, together with two new scrubbing techniques that take into account the area of the application's modules. Experimental results show that the area-driven scrubbing technique saves 43.6% of LUTs and 40.9% of registers (REGs) compared to a DMR design. The area-driven technique for the partial TMR design saves 15% of LUTs and 23% of REGs compared to the TMR, without sacrificing availability but with increased power consumption for scrubbing. The conclusion of the work is that dynamic reconfiguration techniques can be effectively applied in FPGAs to trade off resources and power consumption for availability.
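To make the interplay between upsets and scrubbing concrete, the following minimal Python sketch simulates a configuration memory in which random SEUs corrupt essential bits and a periodic blind scrub rewrites them; availability is the fraction of time the design is uncorrupted. All parameters and the failure model are illustrative assumptions, not the thesis's framework or measured rates.

    import random

    # Illustrative assumptions, not thesis parameters:
    P_UPSET = 0.01            # probability of an SEU in a given time step
    ESSENTIAL_FRACTION = 0.2  # fraction of configuration bits whose flip causes failure
    SCRUB_PERIOD = 20         # time steps between blind-scrub passes
    STEPS = 200_000

    random.seed(1)
    corrupted = False
    down_steps = 0
    for step in range(STEPS):
        # Inject: an upset lands in the essential region with some probability.
        if random.random() < P_UPSET and random.random() < ESSENTIAL_FRACTION:
            corrupted = True
        # Repair: a scrub pass rewrites the whole configuration memory.
        if step % SCRUB_PERIOD == 0:
            corrupted = False
        if corrupted:
            down_steps += 1  # the module's output cannot be trusted

    print(f"availability ~ {1 - down_steps / STEPS:.4f}")

Raising the scrub rate in this toy model shrinks the window during which a flipped essential bit goes unrepaired, which is the resources-and-power versus availability trade-off the abstract describes.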
|
637 |
Multi-agent based simulation of self-governing knowledge commons. Macbeth, Samuel. January 2014.
The potential of user-generated sensor data for participatory sensing has motivated the formation of organisations focused on the exploitation of collected information and associated knowledge. Given the power and value of both the raw data and the derived knowledge, we advocate an open approach to data and intellectual-property rights. By treating user-generated content, as well as derived information and knowledge, as a common-pool resource, we hypothesise that all participants can be compensated fairly for their input. To test this hypothesis, we undertake an extensive review of experimental, commercial and social participatory-sensing applications, from which we identify that a decentralised, community-oriented governance model is required to support this open approach. We show that the Institutional Analysis and Design framework as introduced by Elinor Ostrom, in conjunction with a framework for self-organising electronic institutions, can be used to give both an architectural and algorithmic base for the necessary governance model, in terms of operational and collective choice rules specified in computational logic. As a basis for understanding the effect of governance on these applications, we develop a testbed which joins our logical formulation of the knowledge commons with a generic model of the participatory-sensing problem. This requires a multi-agent platform for the simulation of autonomous and dynamic agents, and a method of executing the logical calculus in which our electronic institution is specified. To this end, firstly, we develop a general-purpose, high-performance platform for multi-agent based simulation, Presage2. Secondly, we propose a method for translating event-calculus axioms into rules compatible with business rule engines, and provide an implementation for JBoss Drools along with a suite of modules for electronic institutions. Through our simulations we show that, when building electronic institutions for managing participatory sensing as a knowledge commons, proper enfranchisement of agents (as outlined in Ostrom's work) is key to striking a balance between endurance, fairness and reduction of greedy behaviour. We conclude with a set of guidelines for engineering knowledge commons for the next generation of participatory-sensing applications.
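As a toy illustration of the governance idea (not Presage2 or the thesis's event-calculus rules), the Python sketch below runs a round-based simulation in which agents contribute sensed data to a common pool, monitoring records non-contribution, and a graduated sanction eventually excludes persistent free riders; all names and thresholds are assumptions.

    import random

    class Agent:
        def __init__(self, name, greedy=False):
            self.name, self.greedy, self.member = name, greedy, True

        def contribute(self):
            # Greedy agents appropriate knowledge but rarely contribute data.
            return 0 if self.greedy and random.random() < 0.8 else 1

    random.seed(0)
    agents = [Agent(f"a{i}", greedy=(i < 2)) for i in range(10)]
    offences = {a.name: 0 for a in agents}

    for round_no in range(100):
        contributions = {a.name: a.contribute() for a in agents if a.member}
        pool = sum(contributions.values())  # knowledge available to members
        for a in agents:
            if a.member and contributions[a.name] == 0:
                offences[a.name] += 1      # monitoring detects non-contribution
                if offences[a.name] > 30:  # graduated sanction: exclusion
                    a.member = False

    print({a.name: a.member for a in agents})

In such a run the compliant agents retain membership while the greedy ones are eventually excluded; balancing such monitoring and sanctioning rules against fairness and endurance is what the thesis's simulations investigate at scale.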
|
638 |
Optimal control of differential inclusions. Palladino, Michele. January 2015.
The thesis concerns some recent advances in necessary conditions for optimal control problems, paying particular attention to the case in which the velocity constraint is expressed in terms of a multifunction. In the first part of the thesis we explore the link between relaxation and first-order necessary conditions. Relaxation is a widely used regularization procedure in optimal control, involving the replacement of velocity sets by their convex hulls, to ensure the existence of a minimizer. It turns out that pathological situations can arise in which the costs of the relaxed and original problems do not coincide (an infimum gap). In this case, solving the relaxed problem does not yield approximate solutions of the optimal control problem of interest. In particular, we show how necessary conditions expressed in terms of the Fully Convexified Hamiltonian Inclusion are affected by the presence of an infimum gap. Applications of these results are also shown in the case in which the velocity constraint is expressed in terms of controlled differential equations. In the second part of the thesis we study the regularity of the Hamiltonian along the optimal trajectory for problems with state constraints. Two applications of these properties are demonstrated. One is to derive improved conditions which guarantee the nondegeneracy of necessary conditions of optimality, in the form of a Hamiltonian inclusion. The other is to derive new, less restrictive conditions under which minimizers in the calculus of variations have bounded slope.
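For readers unfamiliar with the objects involved, the standard definitions behind the abstract can be stated as follows (textbook formulations, not results specific to the thesis):

    % Dynamics as a differential inclusion, and its relaxation
    % (co denotes the convex hull):
    \[
      \dot{x}(t) \in F(t, x(t)) \quad \text{a.e. } t \in [S, T],
      \qquad
      \dot{x}(t) \in \operatorname{co} F(t, x(t)) \quad \text{(relaxed problem)}.
    \]
    % The (maximized) Hamiltonian appearing in the necessary conditions:
    \[
      H(t, x, p) \;=\; \max_{v \in F(t, x)} \langle p, v \rangle .
    \]
    % An infimum gap occurs when the relaxed problem has strictly
    % smaller cost: \inf(\text{relaxed}) < \inf(\text{original}).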
|
639 |
Parametric polyhedral optimisation for high-level synthesis. Liu, Junyi. January 2017.
High-level synthesis (HLS) improves hardware design productivity by using high-level programming languages for design entry. Although modern HLS tools support various automatic optimisations, manual effort is still required to achieve sufficient hardware acceleration. Loop pipelining is one of the most important optimisation methods in HLS for increasing loop parallelism. In this thesis, we extend the capability of loop pipelining in HLS to handle loops with uncertain dependencies (i.e., parameterised by an undetermined variable) and/or non-uniform dependencies (i.e., varying between loop iterations). Our optimisations allow a pipeline to be scheduled without the aforementioned memory dependencies at compile time, while an associated controller adjusts the execution speed of loop iterations at runtime. A parametric polyhedral analysis is developed to generate the control logic and is integrated into an automated source-to-source code transformation framework. Experiments over a suite of benchmarks show that transformed pipelines achieve 3.7-11× speedups with reasonable resource overhead. To tackle the challenge of memory and communication bottlenecks, we have also developed a tile size selection method for loop tiling to improve data locality. The size of the tiles, which can significantly affect the memory requirement, is usually determined by partial enumeration. In this thesis we propose an analytical methodology to automate the selection of a tile size for optimised memory reuse in HLS. A new parametric polyhedral analysis for memory mapping is introduced to capture memory usage analytically for arbitrary tile sizes. To determine the tile size for data reuse in constrained on-chip memory, an algorithm is then developed to optimise over this model, using non-linear solvers to minimise communication overhead. We show experimentally that our tile size selection quickly produces high-quality solutions with an efficient memory mapping.
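To illustrate the shape of the tile-size problem, here is a minimal Python sketch that picks tile dimensions minimising off-chip traffic under an on-chip memory budget. The footprint and traffic formulas are assumed stand-ins for the thesis's parametric polyhedral memory model, and exhaustive search stands in for its non-linear solver; all sizes are hypothetical.

    # Illustrative assumptions, not the thesis's model:
    N = 1024                # problem size (square iteration space)
    WORD = 4                # bytes per array element
    ON_CHIP = 64 * 1024     # on-chip memory budget in bytes

    def footprint(tx, ty):
        # Assumed reuse pattern: a tile buffers a tx*ty block plus halo rows/columns.
        return (tx * ty + tx + ty) * WORD

    def traffic(tx, ty):
        # Assumed traffic model: every tile loads its footprint from off-chip once.
        tiles = ((N + tx - 1) // tx) * ((N + ty - 1) // ty)
        return tiles * footprint(tx, ty)

    candidates = [(tx, ty)
                  for tx in range(8, N + 1, 8)
                  for ty in range(8, N + 1, 8)
                  if footprint(tx, ty) <= ON_CHIP]   # memory constraint
    best = min(candidates, key=lambda t: traffic(*t))
    print("chosen tile:", best, "traffic (bytes):", traffic(*best))

Under this model the total traffic behaves like N²·WORD·(1 + 1/tx + 1/ty), so the search favours the largest balanced tile that fits the budget, which is the kind of trade-off an analytical model lets a solver resolve without enumerating hardware runs.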
|
640 |
Signal analysis of continuous blood pressure recordings in ambulant patients. Cicchiello, L. R. January 1982.
No description available.
|