121. A fundamental study into the theory and application of the partial metric spaces. O'Neill, Simon John, January 1998.
Our aim is to establish the partial metric spaces within the context of Theoretical Computer Science. We present a thesis in which the big "idea" is to develop a more (classically) analytic approach to problems in Computer Science. The partial metric spaces are the means by which we discuss our ideas. We build directly on the initial work of Matthews and Wadge in this area. Wadge introduced the notion of healthy programs corresponding to complete elements in a semantic domain, and of size being the extent to which a point is complete. To extend these concepts to a wider context, Matthews placed this work in a generalised metric framework. The resulting partial metric axioms are the starting point for our own research. In an original presentation, we show that T0-metrics are either quasi-metrics, if we discard symmetry, or partial metrics, if we allow non-zero self-distances. These self-distances are how we capture Wadge's notion of size (or weight) in an abstract setting, and Edalat's computational models of metric spaces are examples of partial metric spaces. Our contributions to the theory of partial metric spaces include abstracting their essential topological characteristics to develop the hierarchical spaces, investigating their T0-topological properties, and developing metric notions such as completions. We identify a quantitative domain to be a continuous domain with a T0-metric inducing the Scott topology, and introduce the weighted spaces as a special class of partial metric spaces derived from an auxiliary weight function. Developing a new area of application, we model deterministic Petri nets as dynamical systems, which we analyse to prove liveness properties of the nets. Generalising to the framework of weighted spaces, we can develop model-independent analytic techniques. To develop a framework in which we can perform the more difficult analysis required for non-deterministic Petri nets, we identify the measure-theoretic aspects of partial metric spaces as fundamental, and use valuations as the link between weight functions and information measures. We are led to develop a notion of local sobriety, which itself appears to be of interest.
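For readers unfamiliar with the axioms referred to above, a standard formulation of a partial metric p : X × X → [0, ∞) (following Matthews; this is background, not a quotation from the thesis) is:

```latex
% Standard partial metric axioms (following Matthews); not taken from the thesis itself.
\begin{align*}
&\text{(P1)}\quad x = y \iff p(x,x) = p(x,y) = p(y,y) \\
&\text{(P2)}\quad p(x,x) \le p(x,y) \\
&\text{(P3)}\quad p(x,y) = p(y,x) \\
&\text{(P4)}\quad p(x,z) \le p(x,y) + p(y,z) - p(y,y)
\end{align*}
```

Setting p(x,x) = 0 for every x recovers an ordinary metric, while a non-zero self-distance p(x,x) plays the role of the weight, or "size", of x discussed in the abstract.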
122. Towards efficacious groupware development : an empirical modelling approach. Chan, Zhan En, January 2009.
No description available.
123. Towards effective dynamic resource allocation for enterprise applications. Chester, Adam P., January 2011.
The growing use of online services requires substantial supporting infrastructure. The efficient deployment of applications relies on the cost effectiveness of commercial hosting providers, who deliver an agreed quality of service, as governed by a service level agreement, for a fee. The priorities of the commercial hosting provider are to maximise revenue, by delivering agreed service levels, and to minimise costs, through high resource utilisation. In order to deliver high service levels and resource utilisation, it may be necessary to reorganise resources during periods of high demand. This reorganisation process may be manual or may be controlled by an autonomous process governed by a dynamic resource allocation algorithm. Dynamic resource allocation has been shown to improve service levels and utilisation and hence profitability. In this thesis several facets of dynamic resource allocation are examined to assess its suitability for the modern data centre. Firstly, three theoretically derived policies are implemented as middleware for a modern multi-tier Web application and their performance is examined under a range of workloads in a real-world test bed. The scalability of state-of-the-art resource allocation policies is explored in two dimensions, namely the number of applications and the quantity of servers under the control of the resource allocation policy. The results show that current policies presented in the literature scale poorly in one or both of these dimensions. A new policy with significantly improved scalability characteristics is proposed and demonstrated at scale through simulation. The placement of applications across a data centre makes them susceptible to failures in shared infrastructure. To address this issue, an application placement mechanism is developed to augment any dynamic resource allocation policy. The results of this placement mechanism demonstrate a significant improvement in the worst case when compared to a random allocation mechanism. A model for the reallocation of resources in a dynamic resource allocation system is also devised. The model demonstrates that the assumption of a constant resource reallocation cost is invalid under both physical reallocation and migration of virtualised resources.
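As a rough illustration of the kind of decision such an autonomous allocation process makes (not one of the policies studied in the thesis), the sketch below greedily assigns servers to application pools by estimated marginal revenue; the pool names, demand figures and revenue model are assumptions made for the example.

```python
# Illustrative only: a naive greedy reallocation step for a shared server pool.
# The demand/revenue model and all figures are assumptions, not the thesis's policy.

def reallocate(pools, total_servers):
    """Assign servers to application pools greedily by marginal revenue.

    pools: dict name -> {"demand": req/s, "capacity_per_server": req/s, "revenue_per_req": value}
    Returns dict name -> number of servers.
    """
    allocation = {name: 0 for name in pools}
    for _ in range(total_servers):
        def marginal(name):
            # Extra revenue earned by giving this pool one more server.
            p = pools[name]
            served_now = min(p["demand"], allocation[name] * p["capacity_per_server"])
            served_next = min(p["demand"], (allocation[name] + 1) * p["capacity_per_server"])
            return (served_next - served_now) * p["revenue_per_req"]
        best = max(pools, key=marginal)
        allocation[best] += 1
    return allocation

if __name__ == "__main__":
    pools = {
        "web":   {"demand": 900.0, "capacity_per_server": 200.0, "revenue_per_req": 0.01},
        "batch": {"demand": 400.0, "capacity_per_server": 100.0, "revenue_per_req": 0.05},
    }
    print(reallocate(pools, 8))
```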
124. Intelligent feature selection for neural regression : techniques and applications. Zhang, Fu, January 2012.
Feature Selection (FS) and regression are two important technique categories in Data Mining (DM). In general, DM refers to the analysis of observational datasets to extract useful information and to summarise the data so that it can be more understandable and be used more efficiently in terms of storage and processing. FS is the technique of selecting a subset of features that are relevant to the development of learning models. Regression is the process of modelling and identifying the possible relationships between groups of features (variables). Compared with conventional techniques, Intelligent System Techniques (ISTs) are usually favourable due to their flexibility in handling real-life problems and their tolerance of data imprecision, uncertainty, partial truth, etc. This thesis introduces a novel hybrid intelligent technique, namely Sensitive Genetic Neural Optimisation (SGNO), which is capable of reducing the dimensionality of a dataset by identifying the most important group of features. The capability of SGNO is evaluated with four practical applications in three research areas: plant science, civil engineering and economics. SGNO is constructed using three key techniques, known as the core modules: Genetic Algorithm (GA), Neural Network (NN) and Sensitivity Analysis (SA). The GA module controls the progress of the algorithm and employs the NN module as its fitness function. The SA module quantifies the importance of each available variable using the results generated in the GA module. The global sensitivity scores of the variables are used to determine the importance of the variables: variables with higher sensitivity scores are considered more important than those with lower scores. After determining the variables' importance, the performance of SGNO is evaluated using the NN module, which takes various numbers of variables with the highest global sensitivity scores as the inputs. In addition, the symbolic relationship between a group of variables with the highest global sensitivity scores and the model output is discovered using Multiple-Branch Encoded Genetic Programming (MBE-GP). A total of four datasets have been used to evaluate the performance of SGNO. These datasets involve the prediction of short-term greenhouse tomato yield, the prediction of longitudinal dispersion coefficients in natural rivers, the prediction of wave overtopping at coastal structures and the modelling of the relationship between the growth of industrial inputs and the growth of the gross industrial output. SGNO was applied to all these datasets to explore its effectiveness in reducing the dimensionality of the datasets. The performance of SGNO is benchmarked against four dimensionality reduction techniques: Backward Feature Selection (BFS), Forward Feature Selection (FFS), Principal Component Analysis (PCA) and Genetic Neural Mathematical Method (GNMM). The applications of SGNO to these datasets showed that SGNO is capable of identifying the most important feature groups in the datasets effectively and that its general performance is better than that of the benchmark techniques. Furthermore, the symbolic relationships discovered using MBE-GP achieve regression accuracy competitive with that of the NN models.
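To make the GA-plus-NN wrapper idea concrete, here is a minimal, generic sketch (not the SGNO implementation): a genetic algorithm evolves binary feature masks and scores each mask by the cross-validated error of a small neural network trained on the selected features. The network size, GA parameters and use of scikit-learn are assumptions for illustration, and the sensitivity analysis module is omitted.

```python
# Generic GA-wrapper feature selection with a neural-network fitness function.
# Illustrative sketch only; parameters and structure are assumptions, not SGNO itself.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Negative cross-validated MSE of an MLP trained on the selected feature subset."""
    if mask.sum() == 0:
        return -np.inf
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    scores = cross_val_score(model, X[:, mask.astype(bool)], y, cv=3,
                             scoring="neg_mean_squared_error")
    return scores.mean()

def ga_select(X, y, pop_size=20, generations=15, p_mut=0.1):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))       # binary feature masks
    for _ in range(generations):
        fit = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
            flip = rng.random(n) < p_mut                # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    fit = np.array([fitness(ind, X, y) for ind in pop])
    return pop[int(np.argmax(fit))]                     # best feature mask found
```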
125. Supporting cooperation and coordination in open multi-agent systems. Franks, Henry P. W., January 2013.
Cooperation and coordination between agents are fundamental processes for increasing aggregate and individual benefit in open Multi-Agent Systems (MAS). The increased ubiquity, size, and complexity of open MAS in the modern world have prompted significant research interest in the mechanisms that underlie cooperative and coordinated behaviour. In open MAS, in which agents join and leave freely, we can assume the following properties: (i) there are no centralised authorities, (ii) agent authority is uniform, (iii) agents may be heterogeneously owned and designed, and may consequently have conflicting intentions and inconsistent capabilities, and (iv) agents are constrained in interactions by a complex connecting network topology. Developing mechanisms to support cooperative and coordinated behaviour that remain effective under these assumptions remains an open research problem. Two of the major mechanisms by which cooperative and coordinated behaviour can be achieved are (i) trust and reputation, and (ii) norms and conventions. Trust and reputation, which support cooperative and coordinated behaviour through notions of reciprocity, are effective in protecting agents from malicious or selfish individuals, but their capabilities can be affected by a lack of information about potential partners and the impact of the underlying network structure. Regarding conventions and norms, there are still a wide variety of open research problems, including: (i) manipulating which convention or norm a population adopts, (ii) how to exploit knowledge of the underlying network structure to improve mechanism efficacy, and (iii) how conventions might be manipulated in the middle and latter stages of their lifecycle, when they have become established and stable. In this thesis, we address these issues and propose a number of techniques and theoretical advancements that help ensure the robustness and efficiency of these mechanisms in the context of open MAS, and demonstrate new techniques for manipulating convention emergence in large, distributed populations. Specifically, we (i) show that gossiping of reputation information can mitigate the detrimental effects of incomplete information on trust and reputation and reduce the impact of network structure, (ii) propose a new model of conventions that accounts for limitations in existing theories, (iii) show how to manipulate convention emergence using small groups of agents inserted by interested parties, (iv) demonstrate how to learn which locations in a network have the greatest capacity to influence which convention a population adopts, and (v) show how conventions can be manipulated in the middle and latter stages of the convention lifecycle.
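As background on how convention emergence is typically modelled (a generic illustration in the spirit of highest-cumulative-reward update rules, not the model proposed in the thesis), the sketch below has networked agents repeatedly play a pairwise coordination game and adopt whichever action has earned them the most; the ring topology, reward values and exploration rate are assumptions.

```python
# Minimal convention-emergence sketch on a network (illustrative assumptions throughout).
import random
from collections import defaultdict

def emerge(neighbours, actions=("A", "B"), rounds=10000, seed=1):
    rng = random.Random(seed)
    agents = list(neighbours)
    state = {a: rng.choice(actions) for a in agents}          # current action of each agent
    payoff = {a: defaultdict(float) for a in agents}          # cumulative reward per action
    for _ in range(rounds):
        a = rng.choice(agents)
        b = rng.choice(neighbours[a])
        reward = 1.0 if state[a] == state[b] else -1.0        # pairwise coordination game
        payoff[a][state[a]] += reward
        payoff[b][state[b]] += reward
        for agent in (a, b):                                  # greedy update with exploration
            if rng.random() < 0.1:
                state[agent] = rng.choice(actions)
            else:
                state[agent] = max(actions, key=lambda act: payoff[agent][act])
    return state

# Example: a ring of 20 agents; print the majority convention after the run.
ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
adopted = list(emerge(ring).values())
print("majority convention:", max(set(adopted), key=adopted.count))
```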
126. Performance evaluation and resource management in enterprise systems. Xue, James Wen Jun, January 2009.
This thesis documents research conducted as part of an EPSRC (EP/C538277/01) project whose aim was to understand, capture and define the service requirements of cluster-supported enterprise systems. This research includes developing techniques to verify that the infrastructure is delivering on its agreed service requirements and a means of dynamically adjusting the operating policies if the service requirements are not being met. The research in this thesis falls into three broad categories: 1) the performance evaluation of data persistence in distributed enterprise applications; 2) Internet workload management and request scheduling; 3) dynamic resource allocation in server farms. Techniques for request scheduling and dynamic resource allocation are developed, with the aim of maximising the total revenue from different applications run in an Internet service hosting centre. Given that data is one of the most important assets of a company, it is essential that enterprise systems should be able to create, retrieve, update and delete data effectively. Web-based applications require application data and session data, and the persistence of these data is critical to the success of the business. However, data persistence comes at a cost as it introduces a performance overhead to the system. This thesis reports on research using state-of-the-art enterprise computing architectures to study the performance overheads of data persistence. Internet service providers (ISPs) are bound by quality of service (QoS) agreements with their clients. Since different applications serve various types of request, each with an associated value, some requests are more important than others in terms of revenue contribution. This thesis reports on the development of a priority queue-based request scheduling scheme, which positions waiting requests in their relevant queues based on their priorities. In so doing, more important requests are processed sooner even though they may arrive in the system later than others. An experimental evaluation of this approach is conducted using an event-driven simulator; the results demonstrate promising improvements over a number of existing methods in terms of revenue contribution. Due to the bursty nature of web-based workload, it is very difficult to manage server resources in an Internet hosting centre. Static approaches such as resource provisioning either result in wasted resources (i.e., underutilisation in lightly loaded situations) or offer no help if resources are overutilised. Therefore, dynamic approaches to resource management are needed. This thesis proposes a bottleneck-aware, dynamic server switching policy, which is used in combination with an admission control scheme. The objective of this scheme is to optimise the total revenue in the system, while maintaining the QoS agreed across all server pools in the hosting centre. A performance evaluation is conducted via extensive simulation, and the results show a considerable improvement from the bottleneck-aware server switching policy over a proportional allocation policy and a system that implements no dynamic server switching.
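A minimal sketch of the priority-queue idea described above (illustrative only; the queue structure, priorities and revenue model of the actual scheme are those described in the thesis, not here): requests carry a class priority, and the scheduler always serves the highest-priority waiting request, falling back to arrival order within a class.

```python
# Illustrative priority-based request scheduler; request names and priorities are made up.
import heapq
import itertools

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker preserving FIFO order within a class

    def submit(self, request, priority):
        # Lower number = higher priority; heapq pops the smallest tuple first.
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def next_request(self):
        if not self._heap:
            return None
        _, _, request = heapq.heappop(self._heap)
        return request

sched = PriorityScheduler()
sched.submit("browse catalogue", priority=3)
sched.submit("checkout order", priority=1)    # higher revenue contribution, served first
print(sched.next_request())                   # -> "checkout order"
```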
127. High-fidelity rendering and display of cultural heritage. Happa, Jassim, January 2011.
Many Cultural Heritage (CH) reconstructions today use black-box rendering solutions with little regard for the appropriate treatment of lighting, material light reflectance properties or light transport algorithms. This may be done in favour of faster computational performance, or appropriate lighting may simply not be a priority (as long as the end result is visually convincing). This can lead to misrepresenting CH environments, both in their present and past forms. The handful of publications that do pay special attention to lighting emphasise case-specific problems rather than attempting to generalise a rendering pipeline tailored to the needs of CH scenes. The dissertation presents a research framework to render CH scenes appropriately, and novel approaches to document, estimate and accelerate global illumination for virtual archaeology purposes. First, three reconstruction case studies conducted with an unbiased rendering pipeline in mind are presented. Second, a research framework to reverse-engineer the past (through high-fidelity rendering) is overviewed. Through this proposed framework, it is possible to create historically and physically accurate models based on input available today. The approach extends the established Predictive Rendering pipeline by introducing a historical comparison component. Third, a novel method to preview appropriately lit virtual environments is presented. The method is particularly useful for CH rendition, extending Image-based Lighting to employ empirically captured illumination to relight interior CH scenes. It is intended as a fast, high-quality preview method for CH models before a high-quality render is initiated, therefore also making it useful in a Predictive Rendering context. Finally, a study on uses of High Dynamic Range (HDR) imaging specifically for CH documentation and display purposes is also presented. This includes the use of a novel prototype camera to illustrate a proof of concept of how to document vast dynamic ranges of light with HDR video, based on the needs of CH research.
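As a small illustration of the HDR documentation theme (the standard weighted multi-exposure merge, assuming a linear camera response; it is not the prototype camera pipeline used in the thesis), the sketch below combines bracketed low-dynamic-range exposures into a radiance estimate.

```python
# Illustrative merge of bracketed exposures into an HDR radiance estimate.
# Assumes a linear camera response; the toy scene and shutter times are made up.
import numpy as np

def merge_exposures(images, exposure_times):
    """images: list of float arrays in [0, 1]; exposure_times: shutter times in seconds."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-range pixels, ignore under/over-exposed ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)          # each well-exposed pixel estimates radiance = value / time
        den += w
    return num / np.maximum(den, 1e-6)

# Toy example: three synthetic exposures of the same scene.
scene = np.linspace(0.0, 4.0, 16).reshape(4, 4)      # "true" radiance
times = [0.05, 0.2, 0.8]
ldr = [np.clip(scene * t, 0.0, 1.0) for t in times]
print(merge_exposures(ldr, times).round(2))
```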
128. Network routing optimisation and effective multimedia transmission to enhance QoS in communication networks. Kusetoğulları, Hüseyin, January 2012.
With the increased usage of communication services in networks, finding routes for reliable transmission and providing effective multimedia communication have become very challenging problems. This has been a strong motivation to examine and develop methods and techniques to find routing paths efficiently and to provide effective multimedia communication. This thesis is mainly concerned with designing, implementing and adapting intelligent algorithms to overcome the computational complexity of network routing problems, and with testing the performance of the resulting applications. It also introduces hybrid algorithms, developed by exploiting the similarities between genetic algorithm (GA) and particle swarm optimisation (PSO) techniques. Furthermore, it examines the design of a new encoding/decoding method to offer a solution for the problem of unachievable multimedia information in multimedia multicast networks. The techniques presented and developed within the thesis aim to provide maximum utilisation of network resources for handling communication problems. This thesis first proposes GA and PSO implementations which are adapted to solve the single- and multi-objective functions in network routing problems. To offer solutions for network routing problems, binary variable-length and priority-based encoding methods are used in the intelligent algorithms to construct valid paths or potential solutions. The performance of generation operators in GA and PSO is examined and analysed by solving various shortest-path routing problems, and it is shown that the performance of the algorithms varies with the operators selected. Moreover, a hybrid algorithm is developed to address the limited search capability of the individual intelligent algorithms and is implemented to solve the single-objective function. The proposed method uses a strategy of sharing information between GA and PSO to achieve a significant performance enhancement in solving routing optimisation problems. The simulation results demonstrate the efficiency of the hybrid algorithm in optimising the shortest-path routing problem. Furthermore, intelligent algorithms are implemented to solve a multi-objective function which involves more resource constraints in communication networks. The algorithms are adapted to find the multi-optimal paths that provide effective multimedia communication in lossy networks. The simulation results verify that the implemented algorithms are efficient and accurate methods for solving the multi-objective function and finding multi-optimal paths to deliver multimedia packets in lossy networks. Furthermore, the thesis proposes a new encoding/decoding method to maximise throughput in multimedia multicast networks. The proposed method is combined with the two most widely used Multiple Description Coding (MDC) methods, and its utility is discussed by comparing the two MDC methods. Through analysis of the simulation results of these intelligent systems algorithms, it is shown that feasible solutions to complex network optimisation problems can be obtained. Moreover, the proposed hybrid algorithms and the encoding/decoding method demonstrate their efficiency and effectiveness compared with other techniques.
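To illustrate what priority-based encoding means in this setting (a common scheme in the GA/PSO routing literature; the decoder below and the toy network are illustrative assumptions, not code from the thesis), each node in the chromosome carries a priority and a path is decoded by always moving to the highest-priority unvisited neighbour.

```python
# Illustrative decoder for priority-based path encoding; the graph and values are made up.
def decode_path(priorities, adjacency, source, destination):
    """Build a path by repeatedly moving to the unvisited neighbour with highest priority."""
    path, current, visited = [source], source, {source}
    while current != destination:
        candidates = [n for n in adjacency[current] if n not in visited]
        if not candidates:
            return None                      # dead end: invalid chromosome
        current = max(candidates, key=lambda n: priorities[n])
        visited.add(current)
        path.append(current)
    return path

# Toy 5-node network; one priority value per node acts as the chromosome.
adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
chromosome = [0.1, 0.9, 0.3, 0.2, 0.8]
print(decode_path(chromosome, adjacency, source=0, destination=4))   # -> [0, 1, 2, 4]
```

Because every priority vector decodes to (at most) one path, standard GA or PSO operators can act directly on the priority values while the decoder guarantees structural validity of the routes.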
129. Multiresolution neural networks for image edge detection and restoration. Yang, Horng-Chang, January 1994.
One of the methods for building an automatic visual system is to borrow the properties of the human visual system (HVS). Artificial neural networks are based on this doctrine and they have been applied to image processing and computer vision. This work focused on the plausibility of using a class of Hopfield neural networks for edge detection and image restoration. To this end, a quadratic energy minimisation framework is presented. Central to this framework are relaxation operations, which can be implemented using the class of Hopfield neural networks. The role of the uncertainty principle in vision is described: it imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows a trade-off between position and class resolution and ensures both robustness in noise and efficiency of computation. As edge detection and image restoration are ill-posed, some a priori knowledge is needed to regularise these problems. A multiresolution network is proposed to tackle the uncertainty problem and the regularisation of these ill-posed image processing problems. For edge detection, orientation information is used to construct a compatibility function for the strength of the links of the proposed Hopfield neural network. Edge detection results are presented for a number of synthetic and natural images, which show that the iterative network gives robust results at low signal-to-noise ratios (0 dB) and is at least as good as many previous methods at capturing complex region shapes. For restoration, mean square error is used as the quadratic energy function of the Hopfield neural network. The results of the edge detection are used for adaptive restoration. Also shown are the results of restoration using the proposed iterative network framework.
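For context on the relaxation operations mentioned above, the sketch below shows a generic discrete Hopfield-style relaxation that asynchronously minimises a quadratic energy; the weights, biases and toy example are assumptions, whereas the thesis derives its actual weights from edge compatibility and mean-square-error terms.

```python
# Generic discrete Hopfield relaxation over binary states; illustrative values only.
import numpy as np

def hopfield_relax(W, b, s0, max_sweeps=100, rng=None):
    """Asynchronously minimise E(s) = -0.5 * s^T W s - b^T s over s in {0, 1}^n.

    W is assumed symmetric; self-coupling is excluded from the local field.
    """
    rng = rng or np.random.default_rng(0)
    s = s0.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            local_field = W[i] @ s - W[i, i] * s[i] + b[i]
            new_state = 1 if local_field > 0 else 0
            if new_state != s[i]:
                s[i], changed = new_state, True
        if not changed:          # a fixed point is a local energy minimum
            break
    return s

# Example: two mutually supporting units switch on together.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([0.2, 0.2])
print(hopfield_relax(W, b, np.zeros(2, dtype=int)))   # -> [1 1]
```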
130. Application of software engineering tools and techniques to PLC programming : innovation report. Waters, Matthew, January 2009.
The software engineering tools and techniques available for use in traditional information systems industries are far more advanced than in the manufacturing and production industries. Consequently there is a paucity of ladder logic programming support tools. These tools can be used to improve the way in which ladder logic programs are written, to increase the quality and robustness of the code produced and to minimise the risk of software-related downtime. To establish current practice and to ascertain the needs of industry, a literature review and a series of interviews with industrial automation professionals were conducted. Two opportunities for radical improvement were identified: a tool to measure software metrics for code written in ladder logic, and a tool to detect cloned code within a ladder program. Software metrics quantify various aspects of code and can be used to assess code quality, measure programmer productivity, identify weak code and develop accurate costing models with respect to code. They are quicker, easier and cheaper than alternative code reviewing strategies such as peer review, and allow organisations to make evidence-based decisions with respect to code. Code clones occur because reuse of copied and pasted code increases programmer productivity in the short term, but they make programs artificially large and can spread bugs. Cloned code can be removed with no loss of functionality, dramatically reducing the size of a program. To implement these tools, a compiler front end for ladder logic was first constructed. This included a lexer with 24 lexical modes, 71 macro definitions and 663 token definitions, as well as a parser with 729 grammar rules. The software metrics tool and clone detection tool perform analyses on an abstract syntax tree, the output from the compiler. The tools have been designed to be as user friendly as possible. Metrics results are compiled into XML reports that can be imported into spreadsheet applications, and the clone detector generates easily navigable HTML reports for each clone as well as an index file of all clones that contains hyperlinks to all clone reports. Both tools were demonstrated by analysing real factory code from a Jaguar Land Rover body-in-white line. The metrics tool analysed over 1.5 million lines of ladder logic code contained within 23 files and 8466 routines. The results identified routines that are abnormally complex in addition to routines that are excessively large. These routines are a likely source of problems in the future, and immediate action to improve them is recommended. The clone detector analysed 59K lines from a manufacturing cell. The results of this analysis showed that the code could be reduced in volume by 43.9% and revealed previously undetected bugs. By removing clones from all factory code, the code would be reduced in size by so much that it could run on as many as 25% fewer PLCs, yielding a significant saving on hardware costs alone. De-cloned code is also easier to modify, so this process goes some way towards future-proofing the code.
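To give a flavour of clone detection in general (a simple textual line-window approach; the project's tool works on the full abstract syntax tree and produces HTML reports, and the rung text below is a made-up placeholder), the sketch reports groups of line numbers whose normalised windows of code match exactly.

```python
# Illustrative line-window clone detection; not the AST-based tool built in the project.
from collections import defaultdict

def find_clones(lines, window=5):
    """Report groups of 1-based line numbers whose normalised `window`-line spans match."""
    def normalise(line):
        return " ".join(line.split()).lower()   # collapse whitespace and case differences
    windows = defaultdict(list)
    for i in range(len(lines) - window + 1):
        key = tuple(normalise(l) for l in lines[i:i + window])
        windows[key].append(i + 1)
    return [locs for locs in windows.values() if len(locs) > 1]

# Placeholder "rungs"; real input would be the textual form of ladder logic routines.
code = ["XIC Start OTE Motor"] * 3 + ["XIO Stop OTE Lamp"] + ["XIC Start OTE Motor"] * 3
print(find_clones(code, window=3))   # -> [[1, 5]]
```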