1. Virtual prototyping of embedded real-time systems. Trignano, Vincenzo. January 2005.
This thesis presents the ViPERS (Virtual Prototyping of Embedded Real-time Systems) virtual prototyping methodology. The concepts, implementation, and experiments presented in this thesis were developed at the University of Sussex (UoS) in the Centre for VLSI and Computer Graphics as part of an EU-funded project. ViPERS is a design methodology which links the graphical and interactive features of virtual prototyping techniques with key design trends for SoCs. System-level design is a widely adopted approach to dealing with the complexity, short time-to-market, and heterogeneous nature of today's electronic systems. The integration of virtual prototyping with the SystemC design methodology is proposed to address issues such as the modelling of interfaces and the exploration of user-machine interaction, which are becoming of vital importance in embedded handheld devices. A survey of virtual prototyping and SoC design is given in the thesis to provide an understanding of the objectives of the ViPERS methodology. The ViPERS approach is supported by a framework which provides the tools needed for the implementation and simulation of virtual prototypes in the different phases of the suggested design flow. After illustrating the ViPERS methodology and environment, the thesis focuses on the links provided within the ViPERS framework to allow communication between the graphical models and their underlying functional counterparts. A key contribution is the development of a design flow for SoCs, with special focus on the communication infrastructure which enables the graphical and functional models to interact at simulation time. Another contribution is the illustrative case study in which interactive photorealistic models of electronic handheld devices are combined with their respective functional models implemented in SystemC and UML. The case study presents the design of an RF-based remote control for home automation.
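The link between a photorealistic graphical model and its functional counterpart can be pictured as a pair of event channels crossing a thread boundary. The following C++ fragment is only a minimal sketch of that bridging idea, not the ViPERS framework itself; the channel class, the event strings and the threading scheme are illustrative assumptions.

// A minimal sketch (not the ViPERS framework) of a bidirectional event
// bridge between an interactive graphical model and a functional model
// running in a separate simulation thread.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Thread-safe queue carrying events in one direction.
template <typename T>
class EventChannel {
public:
    void post(const T& ev) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(ev);
        cv_.notify_one();
    }
    T wait() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T ev = q_.front();
        q_.pop();
        return ev;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    EventChannel<std::string> toFunctional;   // graphical model -> functional model
    EventChannel<std::string> toGraphical;    // functional model -> graphical model

    // Functional model: reacts to a user input event and emits a display update.
    std::thread functional([&] {
        std::string ev = toFunctional.wait();
        toGraphical.post("display: handled '" + ev + "'");
    });

    // Graphical model: forwards a simulated button press, then renders the reply.
    toFunctional.post("button: power");
    std::cout << toGraphical.wait() << "\n";

    functional.join();
    return 0;
}

In a full prototype the same pattern would carry user input from the rendered handheld device towards the SystemC model and display updates back to the renderer at simulation time.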
2. An analogue VLSI implementation of an integrate-and-fire neural network for real-time auditory data processing. Glover, Mark A. January 1999.
Neuromorphic engineering is the modelling of neurobiological systems in silicon and/or through computer simulation. It draws inspiration from neurobiology in an attempt to gain insight into biological processes, or to develop novel circuits and solutions to real engineering problems. This thesis describes the development of a hardware implementation of the integrate-and-fire neural network stage of the sound segmentation algorithm. It discusses issues involved in the development of novel aVLSI techniques capable of modelling biologically realistic processes. Comparisons are made between hardware and software implementations of the integrate-and-fire neural network for real-time data. Various approaches have been investigated, including fixed/variable interconnection, fixed/variable weight strengths, and programmable, biologically realistic time constants. Cascading techniques have been used to investigate the feasibility of inter-chip communication, and arguments are proposed for design modifications. Results show that aVLSI circuits can be produced which realistically model biological processes and that there is a high degree of similarity between the hardware and software implementations of the integrate-and-fire neural network. Suggestions are made for further investigation and work, including circuit modifications to increase flexibility and performance.
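For readers unfamiliar with the model, the behaviour the aVLSI circuits reproduce is that of a leaky integrate-and-fire neuron: the membrane potential integrates its input, decays with a time constant, and emits a spike and resets when it crosses a threshold. The C++ sketch below illustrates that behaviour in software; the parameter values are illustrative and are not taken from the thesis.

// A minimal software model of a leaky integrate-and-fire neuron.
#include <cstdio>
#include <vector>

int main() {
    const double dt = 1e-4;          // simulation time step (s)
    const double tau = 20e-3;        // membrane time constant (s)
    const double v_thresh = 1.0;     // firing threshold (arbitrary units)
    const double v_reset = 0.0;      // membrane potential after a spike
    const double drive = 1.5;        // constant input drive (arbitrary units)

    double v = 0.0;                  // membrane potential
    std::vector<double> spike_times;

    for (int step = 0; step < 10000; ++step) {     // 1 s of simulated time
        // Leaky integration: tau * dv/dt = -v + drive
        v += dt * (drive - v) / tau;
        if (v >= v_thresh) {                       // threshold crossing -> spike
            spike_times.push_back(step * dt);
            v = v_reset;                           // reset, as in the silicon neuron
        }
    }
    std::printf("spikes in 1 s: %zu\n", spike_times.size());
    return 0;
}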
3. Design and performance characterisation of a modular surveillance system for a distributed processing platform. Robinson, Mike. January 2001.
This thesis investigates pedestrian monitoring using image processing on state-of-the-art real-time distributed camera-processor architectures. An integrated design and evaluation process is proposed, in which the surveillance task is decomposed into component modules, each corresponding to a self-contained vision process. Different approaches to each process are implemented independently, using object-oriented design principles to facilitate both system construction and module interchange during comparative testing. Standard algorithms from the computer vision literature are used together with novel variants, but the scope is restricted to what can be implemented to run in real time on a modest image processing engine. Comparison is made between median-based and mixture-of-Gaussians-based methods for background representation, between connected-component and boundary-following approaches to object segmentation, and between pixel-based and 2-D model-based (PCA with cubic splines) methods for object classification. Quantitative performance-characterisation data for existing solutions is not generally available in the literature in the form of benchmark test sequences, and such data is time-consuming and costly to produce for novel methods. A substantial test data set of real surveillance image sequences has been acquired to test the system and compare alternative approaches. A novel performance-characterisation technique is proposed: it offers comparative quantitative evaluation of the performance and resource requirements of a system. This approach is applied to the different system variants, composed of the alternative module combinations. The results of running the system variants on the test data are compared against manually derived ground-truth data for pedestrian detection. The performance-characterisation approach provided clear comparative data on performance and resource requirements for each variant, analysed by scene and event type. From a review of these results, the optimum module combination is chosen: a system composed of median-based background representation, boundary-following object segmentation and model-based object classification.
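As an illustration of one of the compared module variants, median-based background representation builds a background image from the per-pixel median of recent frames and flags pixels that differ from it by more than a threshold as foreground. The following C++ sketch shows the idea on a tiny synthetic frame; the frame size, noise model and threshold are illustrative assumptions rather than values from the thesis.

// A minimal sketch of median-based background modelling and pixel-level
// foreground detection on small synthetic grey-level images.
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

using Frame = std::vector<int>;  // grey levels, row-major

// Per-pixel median of a history of frames.
Frame median_background(const std::vector<Frame>& history) {
    const size_t n_pixels = history.front().size();
    Frame bg(n_pixels);
    for (size_t p = 0; p < n_pixels; ++p) {
        std::vector<int> samples;
        for (const Frame& f : history) samples.push_back(f[p]);
        std::nth_element(samples.begin(),
                         samples.begin() + samples.size() / 2, samples.end());
        bg[p] = samples[samples.size() / 2];
    }
    return bg;
}

int main() {
    // Five 4x4 background frames with small noise.
    std::vector<Frame> history;
    for (int i = 0; i < 5; ++i) {
        Frame f(16);
        for (int& px : f) px = 50 + std::rand() % 5;
        history.push_back(f);
    }
    Frame bg = median_background(history);

    // Current frame: background plus a bright "pedestrian" region.
    Frame current = history.back();
    current[5] = current[6] = 200;

    const int threshold = 30;        // illustrative difference threshold
    for (size_t p = 0; p < current.size(); ++p) {
        bool foreground = std::abs(current[p] - bg[p]) > threshold;
        std::printf("%c", foreground ? '#' : '.');
        if ((p + 1) % 4 == 0) std::printf("\n");
    }
    return 0;
}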
4. Toward a real-time computational analysis middleware for scientific observations that employ sensor & mobile networks. Goldsby, Michael E. January 2010.
Large-scale scientific applications require new middleware paradigms that do not suffer from the limitations of traditional request/reply middleware. These limitations include tight coupling between components and a lack of information filtering/analytic capabilities, autonomous behaviour, and support for many-to-many communication semantics. The present author argues that event-based agent middleware is a scalable and powerful new type of middleware for building large-scale scientific applications that require real-time stream analysis facilities for sensor/mobile networks. However, it is important that an event-based agent middleware platform include all of the standard functionality that an application programmer expects from middleware. In this thesis, the present author describes the design and implementation of GISMO, a distributed, event-based agent middleware. The scalability, expressiveness and flexibility of GISMO are illustrated throughout this thesis for two application domains: an Internet-wide scientific sensor-based observatory and an ad-hoc mobile network for healthcare. GISMO follows a type- and content-based publish/subscribe model that emphasises programming language integration by supporting type-checking of event data and event type inheritance. Further attention is given to the integration of predictive analytics into the pattern detection and discovery process. In order to address dynamic, large-scale environments, GISMO uses peer-to-peer techniques combined with agency concepts for automatic management of its overlay network of event brokers and for scalable event distribution. Event routing is achieved by utilising specialised tuple spaces that introduce scoping mechanisms to reduce routing state in the system. This is achieved without compromising scalability and efficiency, as is shown by the evaluation of GISMO. The core functionality of the agent environment is extended by introducing four lower-level middleware services that address different requirements in a distributed computing environment. The first is a novel messaging service that attempts to avoid network congestion in the overlay by pushing changes in consumer interest in a given pattern closer to the data source, and therefore enables a resource-efficient deployment of the middleware. The expressiveness of subscriptions in the event-based agent middleware is enhanced with a composite event analytic service that performs the distributed detection and discovery of complex event patterns, thus removing that burden from the clients. A new service/component composition model service is added in order to manage the introduction of new types without requiring system shut-down and complex redeployment of packages. Finally, a standardised middleware integration service is made available that allows agents to interact with agents in other agent environments, as well as with external legacy applications and services.
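The essence of a type- and content-based publish/subscribe model is that a subscription names an event type and a predicate over its content, and the broker delivers only matching events. The C++ sketch below illustrates the idea; it is not the GISMO implementation, and the event type, broker class and predicate are illustrative assumptions.

// A minimal sketch of type- and content-based publish/subscribe:
// subscribers register a predicate over typed event data and only
// matching events are delivered.
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct SensorReading {               // an event type; subtypes could inherit from it
    std::string sensor;
    double value;
};

class Broker {
public:
    using Predicate = std::function<bool(const SensorReading&)>;
    using Handler = std::function<void(const SensorReading&)>;

    // Content-based subscription: deliver only events satisfying 'pred'.
    void subscribe(Predicate pred, Handler handler) {
        subs_.push_back({std::move(pred), std::move(handler)});
    }

    void publish(const SensorReading& ev) {
        for (auto& s : subs_)
            if (s.first(ev)) s.second(ev);
    }
private:
    std::vector<std::pair<Predicate, Handler>> subs_;
};

int main() {
    Broker broker;

    // Consumer interest: temperature readings above 30 degrees only.
    broker.subscribe(
        [](const SensorReading& e) { return e.sensor == "temp" && e.value > 30.0; },
        [](const SensorReading& e) { std::cout << "alert: " << e.value << "\n"; });

    broker.publish({"temp", 21.5});  // filtered out by the predicate
    broker.publish({"temp", 34.2});  // delivered to the subscriber
    return 0;
}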
5. Probabilistic schedulability and quality of service analysis in real-time systems. Leulseged, Amare Mengesha. January 2005.
No description available.
6. Signal processing for real-time inspection of moving surfaces. Soloviev, Dmitri. January 2001.
No description available.
7. The virtual node approach to real-time, fault-tolerant distributed Ada. Hutcheon, Andrew David. January 1993.
No description available.
8. Fault tolerance in fixed-priority hard real-time distributed systems. Lima, George Marconi de Araújo. January 2003.
No description available.
9. A virtual machine for high integrity real-time systems. Cai, Hao. January 2005.
No description available.
10. Worst-case execution time analysis for dynamic branch predictors. Reutemann, Ralf Dieter. January 2008.
No description available.