111. Multi-objective optimisation using the Bees Algorithm. Lee, Ji Young (January 2010)
In the real world, many problems require the best solution to satisfy numerous objectives, and there is therefore a need for suitable Multi-Objective Optimisation methods. Various Multi-Objective solvers have been developed recently. The classical method is easily implemented but requires repetitive program runs and does not generate a true Pareto-optimal set. Intelligent methods are increasingly employed, especially population-based optimisation methods that generate the Pareto front in a single run. The Bees Algorithm is a newly developed population-based optimisation algorithm which has been verified in many fields. However, it is limited to solving single-objective optimisation problems. To apply the Bees Algorithm to a Multi-Objective Optimisation Problem, either the problem is converted to a single-objective one or the Bees Algorithm is modified to function as a Multi-Objective solver. To convert a problem to a single-objective one, the weighted sum method is employed. However, due to the failings of this classical method, a new approach is developed to generate a true Pareto front in a single run. This work also introduces an enhanced Bees Algorithm. A new dynamic selection procedure improves the Bees Algorithm by reducing the number of parameters, and new neighbourhood search methods are adopted to optimise the Pareto front. The enhanced algorithm has been tested on Multi-Objective benchmark functions and the classical Environmental/Economic Power Dispatch Problem (EEDP). The results obtained compare well with those produced by other population-based algorithms. Due to recent trends in renewable energy systems, a new model of the EEDP is necessary. The EEDP was therefore amended in conjunction with the Bees Algorithm to identify the best design in terms of energy performance and carbon emission reduction through the adoption of zero- and low-carbon technologies. This computer-based tool supports the decision-making process in the design of a Low-Carbon City.
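The weighted sum method this abstract mentions is easy to sketch. The following Python snippet is an editorial illustration, not the thesis's code: the two toy objectives, the weights and the brute-force search are invented. It shows how scalarising the objectives lets a single-objective optimiser trace one compromise solution per weight vector, which is why a single-run multi-objective solver is attractive.

```python
# Illustrative sketch (not the thesis's code): the weighted sum method
# collapses several objectives into one scalar cost so that a
# single-objective optimiser can be applied unchanged. The two toy
# objectives, the weights and the brute-force search are invented.

def weighted_sum(objectives, weights):
    """Scalarise a vector of objective values f_i(x) with weights w_i."""
    assert len(objectives) == len(weights)
    return sum(w * f for w, f in zip(weights, objectives))

def fuel_cost(x):
    return x ** 2            # stand-in for a generation cost curve

def emissions(x):
    return (x - 2) ** 2      # stand-in for an emissions curve

def scalar_fitness(x, weights):
    return weighted_sum([fuel_cost(x), emissions(x)], weights)

# Each weight vector yields one compromise solution, so tracing the
# Pareto front needs repeated runs; for non-convex fronts parts of it
# are unreachable, which motivates a true multi-objective solver.
for w in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]:
    best = min((i / 100 for i in range(301)),
               key=lambda x: scalar_fitness(x, w))
    print(w, round(best, 2))
```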
112. Quality of service management in service-oriented grids. Al-Ali, Rashid J. (January 2005)
Grid computing provides a robust paradigm for aggregating disparate resources in a secure and controlled environment. The emerging grid infrastructure gives rise to a class of scientific applications and services in support of collaborative and distributed resource-sharing requirements, as part of teleimmersion, visualization and simulation services. Because such applications operate in a collaborative mode, data must be stored, processed and delivered in a timely manner; they therefore have stringent real-time constraints and quality-of-service (QoS) requirements. A QoS management approach is essential to orchestrate and guarantee the interaction among such applications in a distributed computing environment. Grid architectures require an underpinning of QoS support to manage complex computation-intensive and data-intensive applications, as current grid middleware solutions lack QoS provision, and QoS guarantees in the grid context have not been given the importance they merit. To enhance its functionality, a computational grid must be overlaid with an advanced QoS architecture to best execute applications with real-time constraints. This thesis reports on the design and implementation of a software framework, called Grid QoS Management (G-QoSm). G-QoSm incorporates a new QoS management model and provides a service-oriented QoS management approach that supports the Open Grid Services Architecture. Its novel features include grid-service discovery based on QoS attributes, immediate and advance resource reservation, service execution with QoS constraints, and techniques for QoS adaptation that compensate for resource degradation and optimise resource allocation while maintaining a service-level agreement. The benefits of G-QoSm are demonstrated by prototype test-beds that integrate scientific grid applications and simulate grid data-transfer applications. Results show that both the grid application and the data-transfer simulation perform better when used with the proposed QoS approach. QoS abstractions are presented for building QoS-aware applications in the context of service-oriented grids; these are application programming interfaces that help application developers utilise the proposed QoS management solution.
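To make the advance-reservation feature concrete, here is a minimal sketch of an admission check for advance reservations, in the spirit of what a framework like G-QoSm must perform. The class names, the list-based store and the single-resource model are assumptions for illustration, not G-QoSm's actual API.

```python
# Editorial sketch: admit an advance reservation only if, over the
# requested window, committed capacity never exceeds the resource's
# total. Names and the list-based store are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Reservation:
    start: float   # window start (e.g. seconds since epoch)
    end: float     # window end
    amount: float  # capacity units requested (e.g. CPUs)

class ReservationTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.booked = []  # accepted reservations

    def usage_at(self, t):
        return sum(r.amount for r in self.booked if r.start <= t < r.end)

    def admit(self, req):
        # Committed usage changes only at reservation boundaries, so
        # checking those instants inside the window suffices.
        points = {req.start} | {r.start for r in self.booked} | \
                 {r.end for r in self.booked}
        for t in (p for p in points if req.start <= p < req.end):
            if self.usage_at(t) + req.amount > self.capacity:
                return False  # would violate an existing guarantee
        self.booked.append(req)
        return True

table = ReservationTable(capacity=8)
assert table.admit(Reservation(0, 10, 6))
assert not table.admit(Reservation(5, 15, 4))  # overlap would exceed 8
assert table.admit(Reservation(10, 20, 4))
```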
113. Automatic portal generation based on XML workflow description. Lu, Dashan (January 2010)
This dissertation investigates the automatic generation of computing portals based on XML workflow descriptions. To this end, a software system is designed, implemented and evaluated that allows end-users to build their own customised portal for managing and executing distributed scientific and engineering computations in a service-oriented environment. The whole computation is represented as a data-driven workflow. The portal technique provides a user-friendly problem-solving environment that addresses job assignment, job submission and job feedback. An advantage of this approach is that the complexity of workflow execution in the distributed environment is hidden from the user. However, the manual development and configuration of an application portal requires considerable expertise in web portal techniques, which most scientific end-users do not have. This dissertation addresses this problem by describing a tool chain of three tools that achieves automatic portal generation and configuration. In addition, this dissertation presents a mapping of each element of WSDL to the UDDI data model, the conversion of data-flow workflows to control-flow workflows using XSLT, an implementation of a drag-and-drop visual programming environment for generating workflow skeletons, and a methodology for the automatic layout of portlets in a portal framework.
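The essence of the data-flow to control-flow conversion is an ordering problem: every task must run after the tasks whose data it consumes. The sketch below illustrates that core step in Python; the thesis performs the conversion on XML workflow documents with XSLT, and the task graph here is a hypothetical stand-in.

```python
# Sketch of the core of a data-flow to control-flow conversion: tasks
# linked by data dependencies are ordered so that every producer runs
# before its consumers. The graph and task names are hypothetical.

from graphlib import TopologicalSorter  # Python 3.9+

# Data-flow edges: each task maps to the set of tasks it consumes from.
dataflow = {
    "mesh":      set(),
    "solve":     {"mesh"},
    "visualise": {"solve"},
    "report":    {"solve", "visualise"},
}

# A control-flow rendering is one dependency-respecting sequence.
control_flow = list(TopologicalSorter(dataflow).static_order())
print(control_flow)  # e.g. ['mesh', 'solve', 'visualise', 'report']
```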
114. Visual programming environments for multi-disciplinary distributed applications. Shields, Matthew S. (January 2004)
A Problem Solving Environment is a complete, integrated computing environment for composing, compiling and running applications in a specific problem area or domain. A Visual Programming Environment is one possible front end to a problem solving environment. It applies the visual programming paradigms of "point and click" and "drag and drop", via a Graphical User Interface, to the various constituent components that are used to assemble an application. The aim of the problem solving environment presented here is to provide the ability to build scientific applications by connecting, or plugging, software components together, and thereby to offer an intuitive way to construct scientific applications. Problem solving environments promise a totally new user environment for computational scientists and engineers. In this new paradigm, individual programs, combined to solve a problem in their given areas of expertise, are wrapped as components within an integrated system that is both powerful and easy to use. This thesis aims to address problems in code reuse, the combination of different codes in new ways, and problems with underlying system familiarity and distribution. This is achieved by abstracting application composition using visual programming techniques. The work here focuses on a prototype environment, using a number of demonstration problems from multi-disciplinary problem domains to illustrate some of the main difficulties in building problem solving environments and some possible solutions. A novel approach to code wrapping, component definition and application specification is shown, together with timing and usage comparisons illustrating that this approach can successfully help scientists and engineers in their daily work.
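The component-plugging model the abstract describes can be pictured in a few lines of code. This is an editorial sketch of the general idea, assuming a canvas-style workspace with typed ports; the class and port names are invented and do not come from the thesis.

```python
# Editorial sketch of "plug components together" composition: components
# declare ports, and an application is a set of wires between output and
# input ports. Names are illustrative, not the thesis's component model.

class Component:
    def __init__(self, name, inputs=(), outputs=()):
        self.name = name
        self.inputs = tuple(inputs)
        self.outputs = tuple(outputs)

class Workspace:
    """Stands in for the visual canvas: drop components, draw wires."""
    def __init__(self):
        self.components, self.wires = [], []

    def add(self, component):
        self.components.append(component)
        return component

    def connect(self, src, out_port, dst, in_port):
        if out_port not in src.outputs or in_port not in dst.inputs:
            raise ValueError("no such port")  # the GUI would refuse the wire
        self.wires.append((src.name, out_port, dst.name, in_port))

ws = Workspace()
reader = ws.add(Component("MeshReader", outputs=["mesh"]))
solver = ws.add(Component("FlowSolver", inputs=["mesh"], outputs=["field"]))
viewer = ws.add(Component("Viewer", inputs=["field"]))
ws.connect(reader, "mesh", solver, "mesh")
ws.connect(solver, "field", viewer, "field")
print(ws.wires)
```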
115. GAPS: a hybridised framework applied to vehicle routing problems. Morgan, Matthew J. W. (January 2008)
In this thesis we consider two combinatorial optimisation problems: the Capacitated Vehicle Routing Problem (CVRP) and the Capacitated Arc Routing Problem (CARP). In the CVRP, the objective is to find a set of routes for a homogeneous fleet of vehicles which must service a set of customers from a central depot. In contrast, the CARP requires a set of routes for a fleet of vehicles to service a set of customers at the street level of an intercity network. After a comprehensive discussion of the existing exact and heuristic algorithmic techniques presented in the literature for these problems, computational experiments providing a benchmark comparison of a subset of algorithmic implementations are presented for both the CVRP and CARP, run against a series of dataset instances from the literature. All dataset instances are re-catalogued using a standard format to overcome the difficulties of the different naming schemes and the duplication of instances that exist between different sources. We then present a framework, which we shall call Genetic Algorithm with Perturbation Scheme (GAPS), to solve a number of combinatorial optimisation problems. The idea is to use a genetic algorithm as a container framework in conjunction with a perturbation or weight-coding scheme. Such a scheme alters the underlying input data of a problem instance; the changed data is fed into a standard problem-specific heuristic, and the solution obtained is decoded to give its true cost using the original, unaltered instance data. We first present GAPS in a generic context, using the Travelling Salesman Problem (TSP) as an example, and then provide details of the specific application of GAPS to both the CVRP and CARP. Computational experiments on a large set of problem instances from the literature are presented, and comparisons with the results achieved by the current state-of-the-art algorithmic approaches for both problems are given, highlighting the robustness and effectiveness of the GAPS framework.
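Since the abstract introduces GAPS generically via the TSP, a compact sketch of that generic form may help. In the toy Python below, an editorial illustration whose genome encoding, operators and parameters are assumptions rather than the thesis's implementation, the GA evolves per-city weight vectors, each genome perturbs the distance matrix before a plain nearest-neighbour heuristic runs, and tours are always costed against the original distances.

```python
# Editorial sketch of GAPS on the TSP: the GA evolves weight vectors,
# not tours. A genome scales the distance matrix (a simple weight
# coding), nearest neighbour solves the perturbed instance, and the
# tour is costed against the ORIGINAL distances. All parameters and
# operators here are assumptions for illustration.

import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def nearest_neighbour(dist):
    unvisited, tour = set(range(1, len(dist))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[tour[-1]][j])
        tour.append(nxt)
        unvisited.discard(nxt)
    return tour

def decode(genome, dist):
    """Perturb the instance, solve heuristically, return the TRUE cost."""
    n = len(dist)
    perturbed = [[dist[i][j] * genome[i] for j in range(n)] for i in range(n)]
    return tour_cost(nearest_neighbour(perturbed), dist)

def gaps(dist, pop=30, gens=50):
    n = len(dist)
    population = [[random.uniform(0.5, 1.5) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda g: decode(g, dist))
        parents = population[:pop // 2]              # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            child[random.randrange(n)] *= random.uniform(0.8, 1.2)  # mutate
            children.append(child)
        population = parents + children
    return min(decode(g, dist) for g in population)

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(12)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for (bx, by) in pts]
        for (ax, ay) in pts]
print(round(gaps(dist), 3))
```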
116. Provenance support for service-based infrastructure. Rajbhandari, Shrija (January 2007)
Service-based architectures represent the next evolutionary step in the development of e-science, namely, the transformation of the Internet from a commercial marketplace into a mechanism for sharing multidisciplinary scientific resources. Although scientists in many disciplines have become increasingly reliant on distributed computing technologies for data processing and dissemination, the record of the processing history and origin of a data product, that is, its data provenance, is often nonexistent, incomplete or impossible to recover by potential users. This thesis aims to address data provenance issues in service-based environments, in particular to answer how a scientist who performs a workflow execution in such an environment can (1) document the data provenance of a data item created by the execution, and (2) use the provenance documentation as a recipe to re-execute the workflow. This thesis proposes a provenance model for delivering data provenance support in a service-based environment. Through the use of an example scenario of a scientific workflow in the astrophysics domain, we explore and identify the components of the provenance model. The provenance model proposes a technique to collect and record data provenance for service-based workflow executions, facilitating the collection of data provenance at runtime. To record the collected data provenance, the thesis also proposes a specification for representing provenance that describes the processing history by which a piece of data was derived. The thesis further proposes query interfaces that allow recorded provenance to be queried, formulates a technique for constructing provenance graphs, and supports the re-execution of past workflows. The provenance representation specification, the collection technique and the query interfaces have been used to implement a prototype system to demonstrate the proposed model. The thesis also experimentally evaluates the scalability of the components implemented.
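The core mechanism, recording each service invocation as a structured assertion that links outputs to inputs, can be sketched in a few lines. The snippet below is an editorial illustration of that general pattern, not the thesis's representation specification; the record fields and hashing scheme are assumptions.

```python
# Editorial sketch of runtime provenance collection for a service-based
# workflow: every service invocation is documented as a record linking
# its output to its inputs, and records can later be chained backwards
# into a lineage (provenance graph) or replayed as a recipe. The record
# fields and hashing are assumptions, not the thesis's specification.

import hashlib
import json
import time

def data_id(value):
    """Content-derived identifier for a data item."""
    blob = json.dumps(value, sort_keys=True).encode()
    return hashlib.sha1(blob).hexdigest()[:8]

class ProvenanceStore:
    def __init__(self):
        self.records = []

    def record(self, service, inputs, output):
        self.records.append({
            "service": service,
            "inputs": [data_id(v) for v in inputs],
            "output": data_id(output),
            "timestamp": time.time(),
        })

    def lineage(self, item_id):
        """Walk backwards from a data item to everything it derived from."""
        rec = next((r for r in self.records if r["output"] == item_id), None)
        if rec is None:
            return []
        return [rec] + [r for i in rec["inputs"] for r in self.lineage(i)]

store = ProvenanceStore()
raw = [1.0, 2.0, 4.0]
calibrated = [x * 1.1 for x in raw]
store.record("calibrate", [raw], calibrated)
store.record("average", [calibrated], sum(calibrated) / 3)
for rec in store.lineage(data_id(sum(calibrated) / 3)):
    print(rec["service"], rec["inputs"], "->", rec["output"])
```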
117. Querying distributed heterogeneous structured and semi-structured data sources. Al-Wasil, Fahad M. (January 2007)
The continuing growth and widespread popularity of the internet mean that the collection of useful data available for public access is rapidly increasing in both number and size. These data are spread over distributed heterogeneous data sources such as traditional databases, or sources of various forms containing unstructured and semi-structured data. The value of these data sources would in many cases be greatly enhanced if the data they contain could be combined and queried in a uniform manner. The research work reported in this dissertation is concerned with querying and integrating a multiplicity of distributed heterogeneous structured data residing in relational databases and semi-structured data held in well-formed XML documents produced by internet applications or coded by hand. In particular, we have addressed the problems of: (1) specifying the mappings between a global schema and the local data sources' schemas, and resolving the heterogeneity which can occur between data models, schemas or schema concepts; (2) processing queries that are expressed on a global schema into local queries. We propose an approach to combine and query the data sources through a mediation layer. Such a layer is intended to establish and evolve an XML Metadata Knowledge Base (XMKB) incrementally, which assists the Query Processor in mediating between user queries posed over the global schema and the queries on the underlying distributed heterogeneous data sources, translating such queries into sub-queries, called local queries, appropriate to each local data source. The XMKB is built in a bottom-up fashion by incrementally extracting and merging the metadata of the data sources. It holds the data sources' information (names, types and locations), descriptions of the mappings between the global schema and the participating data source schemas, and function names for handling semantic and structural discrepancies between the representations. To demonstrate our research, we have designed and implemented a prototype system called SISSD (System to Integrate Structured and Semi-structured Databases). The system automatically creates a GUI tool for meta-users (who perform the metadata integration), which they use to describe mappings between the global schema and local data source schemas; these mappings are used to produce the XMKB. SISSD translates user queries into sub-queries fitting each participating data source by exploiting the mapping information stored in the XMKB. The major results of the thesis are: (1) an approach that facilitates the building of structured and semi-structured data integration systems; (2) a method for generating mappings between global and local schemas' paths, and for resolving the conflicts caused by the heterogeneity of the data sources, such as the naming, structural and semantic conflicts which may occur between schemas; (3) a method for translating queries expressed in terms of a global schema into sub-queries in terms of local schemas. The presented approach shows that: (a) the mapping of schema paths can only be partially automated, since logical heterogeneity problems must be resolved by human judgment based on the application requirements; (b) querying distributed heterogeneous structured and semi-structured data sources is possible.
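The query-translation step is easy to picture: once XMKB-style mapping information exists, rewriting a global query amounts to looking up each global path and grouping the local paths by source. The Python below is an editorial sketch of that grouping step only; the mapping table, path syntax and source names are invented, and the real system additionally applies conflict-resolution functions.

```python
# Sketch of mapping-driven query rewriting: paths over the global
# schema are rewritten, per source, into local paths, splitting one
# global query into one sub-query per data source. The mapping table
# and query shape are hypothetical illustrations.

mappings = {
    # global path          -> {source: local path}
    "customer/name":  {"crm_db": "clients.full_name",
                       "orders_xml": "order/buyer/name"},
    "customer/email": {"crm_db": "clients.email"},
}

def translate(global_paths):
    """Group the requested global paths into per-source local queries."""
    local_queries = {}
    for path in global_paths:
        for source, local_path in mappings.get(path, {}).items():
            local_queries.setdefault(source, []).append(local_path)
    return local_queries

print(translate(["customer/name", "customer/email"]))
# {'crm_db': ['clients.full_name', 'clients.email'],
#  'orders_xml': ['order/buyer/name']}
```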
118. Using 3D Facial Motion for Biometric Identification. Benedikt, Lanthao (January 2009)
No description available.
119. Asynchronous queue machines with explicit forwarding. Beringer, Lennart (January 2002)
We consider computational models motivated by processors which exhibit architectural asynchrony and allow operands to bypass the register bank using a forwarding mechanism. We analyse the interaction between asynchrony and forwarding, derive constraints on the usage of forwarding for various models of operation, and study consequences for compilers targeting such processors. Our approach to reasoning about processor behaviour is programming language based. We introduce an assembly language in which forwarding is explicitly visible. Operational models corresponding to processor abstractions are expressed as structural operational semantics for this language. The benefits of this approach for defining program execution and for relating processor models formally are demonstrated. Furthermore, we study the restrictions on the class of admissible programs for each operational model. Under our programming language perspective, these constraints are expressed as static semantics and formalised as type systems. Suitability of forwarding schemes for particular models of operation follows from soundness and completeness results which are established by standard programming language proof techniques. Well-typed programs are structurally correct and cannot experience run-time errors due to misuse of the forwarding mechanism. Exposing asynchrony and forwarding to the programmer allows a compiler to optimise forwarding behaviour by scheduling operands. We show how program analysis can decide which values to communicate through registers and which ones to forward. The analysis is expressed as a dataflow problem for an intermediate language and is proven sound with respect to a dynamic semantics. Solutions to the dataflow equations yield translations into the assembly language which are functionally faithful to the operational semantics and also structure-preserving, as the resulting programs are well-typed. The theoretical development of the translation is complemented by a prototypical implementation. Experimental results are included for a symbolic conversion of Java virtual machine code into the intermediate language, indicating that application programs contain sufficient opportunities for forwarding to make our approach viable. In conclusion, we demonstrate the benefits of a programming language based view for reasoning about programs targeting architectures with asynchrony and forwarding.
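The well-typedness property described above, that programs cannot misuse the forwarding mechanism at run time, can be approximated by a very small static check. The Python below is an editorial toy, assuming a single forwarding slot whose value must be consumed exactly once; the thesis's type systems and machine models are substantially richer.

```python
# Editorial toy check in the spirit of the thesis's static semantics:
# a value placed on the forwarding path must be consumed exactly once
# before it is overwritten. The three-address instruction format and
# the single-slot forwarding model are simplifying assumptions.

def check_forwarding(program):
    """Each instruction: (op, dest, srcs, forwards), where `forwards`
    sends the result to the forwarding slot instead of a register."""
    pending = None  # pc of a forwarded value awaiting its one consumer
    for pc, (op, dest, srcs, forwards) in enumerate(program):
        if "FWD" in srcs:
            if pending is None:
                return f"pc {pc}: consumes forwarded value, none pending"
            pending = None                      # consumed exactly once
        if forwards:
            if pending is not None:
                return f"pc {pc}: overwrites unconsumed forwarded value"
            pending = pc
    return "ok" if pending is None else f"pc {pending}: value never consumed"

good = [("add", "FWD", ["r1", "r2"], True),
        ("mul", "r3", ["FWD", "r4"], False)]
bad  = [("add", "FWD", ["r1", "r2"], True),
        ("sub", "FWD", ["r1", "r2"], True)]   # clobbers the pending value
print(check_forwarding(good))  # ok
print(check_forwarding(bad))   # pc 1: overwrites unconsumed forwarded value
```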
120. Program comprehension through sonification. Berman, Lewis Irwin (January 2011)
Background: Comprehension of computer programs is daunting, thanks in part to clutter in the software developer's visual environment and the need for frequent visual context changes. Non-speech sound has been shown to be useful in understanding the behavior of a program as it is running.

Aims: This thesis explores whether using sound to help understand the static structure of programs is viable and advantageous.

Method: A novel concept for program sonification is introduced. Non-speech sounds indicate characteristics of, and relationships among, a Java program's classes, interfaces and methods. A sound mapping is incorporated into a prototype tool consisting of an extension to the Eclipse integrated development environment communicating with the sound engine Csound. Developers examining source code can aurally explore entities outside of the visual context, and a rich body of sound techniques provides expanded representational possibilities. Two studies were conducted. In the first, software professionals participated in exploratory sessions to informally validate the sound mapping concept. The second study was a human-subjects experiment to discover whether using the tool and sound mapping improves performance of software comprehension tasks. Twenty-four software professionals and students performed maintenance-oriented tasks on two Java programs, with and without sound.

Results: Viability is strong for differentiation and characterization of software entities, less so for identification. The results show no overall advantage of using sound in terms of task duration at a 5% level of significance. The results do, however, suggest that sonification can be advantageous under certain conditions.

Conclusions: The use of sound in program comprehension shows sufficient promise for continued research. Limitations of the present research include restriction to particular types of comprehension tasks, a single sound mapping, a single programming language, and limited training time. Future work includes experiments and case studies employing a wider set of comprehension tasks, sound mappings in domains other than software, and adding navigational capability for use by the visually impaired.
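As an illustration of what such a sound mapping might look like at its simplest, the sketch below maps a few characteristics of a Java entity to pitch, duration and timbre. All concrete values are invented for the example; the thesis's mapping is richer and drives Csound from within Eclipse rather than computing note specifications in Python.

```python
# Editorial sketch of a sound mapping: characteristics of Java entities
# (kind, size, visibility) drive sound parameters (pitch, duration,
# timbre). Every concrete value here is an invented illustration.

BASE_PITCH = {"class": 60, "interface": 67, "method": 72}  # MIDI notes
TIMBRE = {"public": "bright", "private": "muted", "protected": "soft"}

def sonify(entity):
    pitch = BASE_PITCH[entity["kind"]]
    # Larger entities sound longer, hinting at their weight in the code.
    duration = 0.2 + min(entity["loc"], 500) / 500.0
    return {"pitch": pitch,
            "duration_s": round(duration, 2),
            "timbre": TIMBRE[entity["visibility"]]}

print(sonify({"kind": "class", "loc": 340, "visibility": "public"}))
# {'pitch': 60, 'duration_s': 0.88, 'timbre': 'bright'}
```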