11.
Flexibility in dependable real-time communication. Broster, Ian. January 2004.
No description available.
12.
Real-time processor architectures for worst case execution time reduction. Whitham, Jack. January 2008.
No description available.
13.
Real-time algorithms for optimal CCD data reduction in high energy astronomy. Welch, Stephen James. January 2001.
No description available.
14.
Real-time computer animations and the study of visual responses in crabs. Johnson, Aaron Paul. January 2001.
No description available.
15.
Rapid generation of hardware functionality in DSP systems on heterogeneous platforms. Reilly, Darren Gerard. January 2006.
No description available.
16.
Automated analysis of real-time software properties. Gorry, Benjamin John McEwan. January 2007.
Testing is the traditional method of showing correctness of real-time software systems, but it can prove expensive in terms of time and resources. Formal approaches to real-time verification have fallen short, for several reasons:

• formal models have to be constructed manually,
• the level of expertise required to develop tractable models can be considerable,
• it can be difficult to gather accurate timing information for the system being analysed, and
• it can often be difficult to relate the results of analysis back to sections of system source code.

A multi-perspective, annotation-driven approach called PARTES is presented. PARTES extracts functional and timing information from annotated source code. This information is then used to develop Promela models for analysis with SPIN, and CSPL models that are subjected to sensitivity analysis via SPNP. The results of analysis can be related directly back to the annotated sections of source code. The analysis identifies potentially problematic timing areas within a system, aiming to inform testing and reduce the resources it requires.
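The abstract does not spell out PARTES's annotation syntax, but the general workflow it describes (annotate source, extract timing information, keep enough location data to map analysis results back to the source) can be sketched. The `@timing` comment format and its field names below are invented for illustration and are not the thesis's notation; a minimal sketch in Python:

```python
import re
from pathlib import Path

# Hypothetical annotation format, e.g.: /* @timing wcet_us=120 period_us=1000 */
# (illustrative only; PARTES's real annotation syntax is not given in the abstract)
ANNOTATION = re.compile(r"/\*\s*@timing\s+(?P<fields>[^*]*)\*/")

def extract_annotations(source_path):
    """Collect timing annotations together with their line numbers, so
    that analysis results can later be related back to the source."""
    records = []
    for lineno, line in enumerate(Path(source_path).read_text().splitlines(), 1):
        match = ANNOTATION.search(line)
        if match:
            fields = dict(f.split("=") for f in match.group("fields").split())
            records.append({"line": lineno, **fields})
    return records
```

The extracted records would then feed a model generator (Promela for SPIN, CSPL for SPNP), with the stored line numbers providing the back-link from analysis results to annotated code.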
17.
Techniques for enhancing the temporal predictability of real-time embedded systems employing a time-triggered software architecture. Maaita, Adi Abdelhalim. January 2009.
This thesis is concerned with the design and implementation of single-processor embedded real-time systems with highly predictable behaviour and strict constraints on resource usage. The main aim of this research is to identify the sources of unpredictable behaviour in such systems (exhibited as timing jitter) when a time-triggered pre-emptive task scheduling approach is adopted, and then to provide software-based techniques that enhance their temporal predictability.

The thesis reviews previous work on predictable real-time task scheduling, as well as resource-access control methods for maintaining predictable real-time behaviour through the prevention of priority inversion and related problems. The design and implementation of the time-triggered hybrid (TTH), time-triggered rate-monotonic (TTRM) and time-triggered deadline-monotonic (TTDM) task schedulers are discussed in detail, as these provide the most predictable behaviour among pre-emptive task schedulers; for that reason they are used as the software platforms in the experimental part of this research.

Two novel software techniques for enhancing temporal predictability in systems using time-triggered schedulers are introduced. The first is a resource-access control protocol named the Timed Resource-Access Protocol (TRAP). This protocol is designed to avoid priority inversion, chained blocking and deadlocks while coercing system tasks to exhibit timing predictability proportional to their significance in the system: task finishing jitter decreases as the significance of a task increases. The second technique is named Planned Pre-emption (PP). It aims to eliminate the scheduling unpredictability caused by variable timer interrupt service times in time-triggered scheduling systems, and its impact appears as a considerable reduction in scheduler task release jitter. Finally, the thesis concludes with a discussion and a summary of the work presented.
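The thesis's scheduler implementations are not reproduced here, but the basic shape of a time-triggered hybrid scheduler (one pre-empting task released on every timer tick, plus cooperative tasks that run to completion between ticks) can be sketched as a simulation. All task names and timing figures below are invented for illustration:

```python
# Minimal simulation of a time-triggered hybrid (TTH) scheduler.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: int     # ticks between releases
    offset: int     # first release tick
    exec_time: int  # simulated execution time, in time units

TICK = 100  # time units per timer tick

preempting = Task("sensor_read", period=1, offset=0, exec_time=10)
cooperative = [
    Task("control_law", period=2, offset=0, exec_time=40),
    Task("logging", period=5, offset=1, exec_time=30),
]

def run(n_ticks):
    backlog = []  # cooperative jobs waiting to run
    for tick in range(n_ticks):
        now = tick * TICK
        # The pre-empting task always runs first, at a fixed point in
        # the tick; this is what keeps its release jitter low.
        print(f"tick {tick}: {preempting.name} ran at {now}")
        t = now + preempting.exec_time
        for task in cooperative:
            if tick >= task.offset and (tick - task.offset) % task.period == 0:
                backlog.append(task)
        # Cooperative tasks run to completion in release order until the
        # tick budget is exhausted; leftovers carry over, appearing as jitter.
        while backlog and t + backlog[0].exec_time <= now + TICK:
            job = backlog.pop(0)
            t += job.exec_time
            print(f"tick {tick}: {job.name} finished at {t}")

run(6)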
18.
A visual framework for formal systems development using interval temporal logic. Chakrapani Rao, Arun. January 2002.
No description available.
19.
The application of neural computation methods to forecasting and monitoring in an airline context. Cumming, Simon Nicholas. January 1998.
This thesis examines the applicability of artificial neural networks (neural computation methods) to tasks of forecasting and condition monitoring involving real-world data in the context of a large airline business. The first chapter introduces artificial neural networks (concentrating on multilayer perceptron, radial basis function and self-organising map networks) and provides some motivation for their use in problems of statistical estimation and monitoring. The second chapter gives a case study of the estimation of airline booking take-up from booking attributes held in a reservations system, comparing multilayer perceptrons and radial basis function networks, used in a classification regime, with the statistical method of Automatic Interaction Detection (AID) and with lookup-table and moving-average methods. Some consideration is given to how the outputs of the neural network should be interpreted, and to application-specific issues. The third chapter introduces the task of aircraft engine condition monitoring, surveys the literature on the use of neural networks and related methods in condition monitoring, and considers the applicability of various neural network approaches to the aircraft engine monitoring problem. In the fourth chapter, a method of context-based novelty detection using a combination of self-organising maps is developed; this method, together with classification and regression approaches to condition monitoring and fault detection using artificial neural networks, is illustrated with a series of case studies. The fifth chapter gives concluding remarks on the use of artificial neural networks for data-driven forecasting and monitoring tasks using operational data, and briefly considers software engineering and methodological issues.
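The thesis combines several self-organising maps for context-based detection; a single-map version suffices to illustrate the underlying principle, namely that inputs far from every map unit (large quantisation error) are novel. A minimal sketch on synthetic data, with all sizes and thresholds chosen arbitrarily:

```python
# Novelty detection with a self-organising map: train on "normal" data,
# then flag inputs whose quantisation error (distance to the
# best-matching unit) exceeds a threshold set from the training data.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, radius0=3.0):
    n_units = grid[0] * grid[1]
    weights = data[rng.choice(len(data), n_units)].astype(float)
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)           # decaying learning rate
        radius = radius0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            dist = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist / (2 * radius ** 2))  # neighbourhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

def quantisation_error(weights, x):
    return np.sqrt(((weights - x) ** 2).sum(axis=1)).min()

normal = rng.normal(0, 1, size=(500, 2))  # synthetic "healthy" data
som = train_som(normal)
threshold = np.quantile([quantisation_error(som, x) for x in normal], 0.99)
print(quantisation_error(som, np.array([0.1, -0.3])) > threshold)  # expected: False
print(quantisation_error(som, np.array([6.0, 6.0])) > threshold)   # expected: True
```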
20.
Anomaly detection in non-stationary and distributed environments. O'Reilly, Colin. January 2014.
Anomaly detection is an important aspect of data analysis, used to identify data items that differ significantly from normal data. It is applied in a variety of fields such as machine monitoring, environmental monitoring and security, and is a well-studied area in pattern recognition and machine learning. In this thesis, the key challenges of performing anomaly detection in non-stationary and in distributed environments are addressed separately.

In non-stationary environments the data distribution may alter, meaning that the concepts to be learned evolve over time. Anomaly detection techniques must be able to adapt to a non-stationary data distribution in order to perform optimally; this requires an update to the model used to classify data. A batch approach requires reconstructing the model each time an update is needed; incremental learning overcomes this by using the previous model as the basis for an update. Two kernel-based incremental anomaly detection techniques are proposed. The first uses kernel principal component analysis, with the kernel eigenspace incrementally updated by splitting and merging kernel eigenspaces; it is shown to be more accurate than current state-of-the-art solutions. The second reduces the number of computations by using an incrementally updated hypersphere in kernel space.

In a non-stationary environment, the parameters of a model also require updating. Anomaly detection algorithms need appropriate parameters in order to perform optimally for a given data set, and if the distribution of the data changes, the parameters must change with it. An automatic parameter optimization procedure is proposed for the one-class quarter-sphere support vector machine, in which the ν parameter is selected automatically based on the anomaly rate in the training set.

In environments such as wireless sensor networks, data may be distributed among a number of nodes. In this case distributed learning is required, where nodes construct a classifier, or an approximation of it, equivalent to the one that would have been formed had all the data been available to a single instance of the algorithm. A principal component analysis based anomaly detection method is proposed that uses the solution to a convex optimization problem. The problem is then derived in a distributed form, with each node running a local instance of the algorithm; by exchanging short messages, nodes iterate towards an anomaly detector equivalent to the global solution.

Detailed evaluations of the proposed techniques are performed against existing state-of-the-art techniques on a variety of synthetic and real-world data sets. Results in the non-stationary setting illustrate the necessity of adapting an anomaly detection model to the changing data distribution; the proposed incremental techniques are shown to maintain accuracy while reducing the number of computations, and parameters derived automatically from an unlabelled training set exhibit superior performance to statically selected parameters. In the distributed setting, local learning is shown to be insufficient due to the lack of examples, whereas distributed learning, passing small amounts of information between neighbouring nodes, yields a model whose performance equals that of the centralized model.
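The quarter-sphere formulation itself is not reproduced here, but the idea of tying ν to the training-set anomaly rate can be sketched with scikit-learn's generic one-class SVM, since ν upper-bounds the fraction of training points treated as outliers. The anomaly-rate estimate below is an assumed input; the thesis derives it automatically:

```python
# Sketch of automatic nu selection for a one-class SVM, given an
# estimated anomaly rate for the unlabelled training set. The generic
# OneClassSVM stands in for the thesis's quarter-sphere variant.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(950, 2))
anomalies = rng.uniform(-6, 6, size=(50, 2))
train = np.vstack([normal, anomalies])  # unlabelled mixture

estimated_anomaly_rate = 0.05  # assumed to be estimated, not hand-labelled

# nu upper-bounds the fraction of training points classified as outliers,
# so setting it to the estimated anomaly rate targets that rate directly.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=estimated_anomaly_rate)
model.fit(train)

flagged = (model.predict(train) == -1).mean()
print(f"fraction flagged as anomalous: {flagged:.3f}")  # close to 0.05
```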