571

Evaluation of Shortest Path Query Algorithm in Spatial Databases

Lim, Heechul January 2003 (has links)
Many variations of algorithms for finding the shortest path in a large graph have been introduced recently, driven by the needs of applications such as Geographic Information Systems (GIS) and Intelligent Transportation Systems (ITS). The primary subjects of those algorithms are materialization and hierarchical path views. Some studies focus on materialization, accepting higher pre-computation and storage costs in exchange for faster query evaluation. Others focus on the shortest-path algorithm itself, which requires less pre-computation and storage but takes more time to compute the shortest path. The main objective of this thesis is to reduce the computation time of shortest-path queries while keeping the degree of materialization as low as possible. The thesis explores two directions: 1) reducing the I/O costs, especially for multiple queries, and 2) reducing the search space in the graph. For 1), two simple algorithms are proposed. For 2), two different levels of materialization are given, namely the boundary set distance matrix and the x-Hop sketch graph, both of which materialize the shortest-path view of the boundary nodes in a partitioned graph. Our experiments show that a combination of the solutions for 1) and 2) performs better than the original Disk-based SP algorithm [7], on which our work is based, and requires much less storage than HEPV [3].
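To make the materialization idea concrete, the following is a minimal, hypothetical sketch (not the thesis's implementation) of answering a query on a partitioned graph with a precomputed boundary-node distance matrix: a local search in the source fragment, a local search in the destination fragment, and the materialized boundary-to-boundary distances bridging the two. The toy graph and all identifiers are assumptions for illustration.

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src within one fragment (adj: node -> [(neighbour, weight)])."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy partition: fragment A holds the source, fragment B the target;
# b1 (in A) and b2 (in B) are boundary nodes.
frag_A = {"s": [("a1", 1.0)], "a1": [("b1", 2.0)], "b1": []}
frag_B = {"b2": [("c", 1.0)], "c": [("t", 1.0)], "t": []}

# "Materialized" boundary-set distances: shortest distances between boundary nodes.
# Here a single pair; in general the matrix covers all boundary-node pairs.
boundary_dist = {("b1", "b2"): 1.0}

def query(s, t):
    d_src = dijkstra(frag_A, s)                     # local search in the source fragment
    best = float("inf")
    for (b_out, b_in), d_mid in boundary_dist.items():
        d_dst = dijkstra(frag_B, b_in)              # local search in the target fragment
        best = min(best, d_src.get(b_out, float("inf")) + d_mid
                   + d_dst.get(t, float("inf")))
    return best

print(query("s", "t"))  # 6.0: s-a1-b1 (3) + b1-b2 (1) + b2-c-t (2)
```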
572

A method for the architectural design of distributed control systems for large, civil jet engines : a systems engineering approach

Bourne, Duncan January 2011 (has links)
The design of distributed control systems (DCSs) for large, civil gas turbine engines is a complex architectural challenge. To date, the majority of research into DCSs has focused on the contributing technologies and high-temperature electronics rather than on the architecture of the system itself. This thesis proposes a method for the architectural design of distributed systems that uses a genetic algorithm to generate, evaluate and refine designs. The proposed designs are analysed for their architectural quality, lifecycle value and commercial benefit. The method is presented together with proof-of-concept results. Whilst the method described here is applied exclusively to DCSs for jet engines, the principles and methods could be adapted to a broad range of complex systems.
573

Portfolio optimisation with transaction cost

Woodside-Oriakhi, Maria January 2011 (has links)
Portfolio selection is an example of decision making under conditions of uncertainty. In the face of an unknown future, fund managers make complex financial choices based on the investor's perceptions of and preferences towards risk and return. Since the seminal work of Markowitz, many studies have been published using his mean-variance (MV) model as a basis. These mathematical models of investor attitudes and asset return dynamics aid in the portfolio selection process. In this thesis we extend the MV model to include cardinality constraints, which limit the number of assets held in the portfolio, and bounds on the proportion of an asset held (if any is held). We present our formulation, based on the Markowitz MV model, for rebalancing an existing portfolio subject to both fixed and variable transaction costs (the fees associated with trading). We determine and demonstrate the differences that arise in the shape of the trading portfolio and efficient frontiers under transaction-cost models with and without cardinality constraints. We apply our flexible heuristic algorithms (genetic algorithm, tabu search and simulated annealing) to both the cardinality-constrained and transaction-cost models, solving problems using data from seven real-world market indices. We show that incorporating optimization into the generation of valid portfolios leads to good-quality solutions in acceptable computational time. We illustrate this on problems from the literature as well as on our own larger data sets.
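For readers unfamiliar with the model being extended, the following is a generic sketch of the cardinality-constrained mean-variance formulation the abstract refers to, written without the fixed and variable transaction-cost terms that the thesis adds for rebalancing. Here Σ is the covariance matrix, μ the expected returns, R the target return, K the cardinality limit, and ε_i, δ_i the lower/upper holding bounds.

```latex
\begin{align*}
\min_{x,\,z}\quad & x^{\top}\Sigma x \\
\text{s.t.}\quad  & \mu^{\top}x \ge R, \qquad \sum_{i=1}^{N} x_i = 1, \\
                  & \varepsilon_i z_i \le x_i \le \delta_i z_i, \qquad i = 1,\dots,N, \\
                  & \sum_{i=1}^{N} z_i = K, \qquad z_i \in \{0,1\}.
\end{align*}
```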
574

On the complexity of matrix multiplication

Stothers, Andrew James January 2010 (has links)
The evaluation of the product of two matrices can be very computationally expensive. The multiplication of two n×n matrices, using the “default” algorithm, can take O(n^3) field operations in the underlying field k. It is therefore desirable to find algorithms to reduce the “cost” of multiplying two matrices together. If multiplication of two n × n matrices can be obtained in O(n^α) operations, the least upper bound for α is called the exponent of matrix multiplication and is denoted by ω. A bound of ω < 3 was found in 1968 by Strassen with his algorithm. He found that multiplication of two 2 × 2 matrices could be obtained in 7 multiplications in the underlying field k, as opposed to the 8 required to do the same multiplication previously. Using recursion, we are able to show that ω ≤ log₂ 7 < 2.8074, which is better than the value of 3 we had previously. In chapter 1, we look at various techniques that have been found for reducing ω. These include Pan’s Trilinear Aggregation, Bini’s Border Rank and Schönhage’s Asymptotic Sum inequality. In chapter 2, we look in detail at the current best estimate of ω found by Coppersmith and Winograd. We also propose a different method of evaluating the “value” of trilinear forms. Chapters 3 and 4 build on the work of Coppersmith and Winograd and examine how cubing and raising to the fourth power of Coppersmith and Winograd’s “complicated” algorithm affect the value of ω, if at all. Finally, in chapter 5, we look at the Group-Theoretic context proposed by Cohn and Umans, and see how we can derive some of Coppersmith and Winograd’s values using this method, as well as showing how working in this context can perhaps be more conducive to showing ω = 2.
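As an illustration of the Strassen step described above, here is a small Python sketch of the seven-multiplication scheme for 2×2 matrices (the standard published formulas, not code from the thesis); applying the scheme recursively to block matrices gives the ω ≤ log₂ 7 bound.

```python
import math

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications in the underlying field."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
print(math.log2(7))  # 2.807... -- the resulting bound on the exponent omega
```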
575

Adaptive Java optimisation using machine learning techniques

Long, Shun January 2004 (has links)
There is a continuing demand for higher performance, particularly in the area of scientific and engineering computation. In order to achieve high performance in the context of frequent hardware upgrades, software must be adaptable for portable performance. What is required is an optimising compiler that evolves and adapts itself to environmental change without sacrificing performance. Java has emerged as a dominant programming language widely used in a variety of application areas. However, its architecture-independent design means that it is frequently unable to deliver high performance, especially when compared with other imperative languages such as Fortran and C/C++. This thesis presents a language- and architecture-independent approach to achieving portable high performance. It uses the mapping notation introduced in the Unified Transformation Framework to specify a large optimisation space. A heuristic random search algorithm is introduced to explore this space in a feedback-directed, iterative optimisation manner. It is then extended with a machine learning approach which enables the compiler to learn from its previous optimisations and apply that knowledge when necessary. Both the heuristic random search algorithm and the learning optimisation approach are implemented in a prototype Adaptive Optimisation Framework for Java (AOF-Java). The experimental results show that the heuristic random search algorithm can find, within a relatively small number of attempts, good points in the large optimisation space. In addition, the learning optimisation approach is capable of finding good transformations for a given program from its prior experience with other programs.
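The feedback-directed random search idea can be sketched as follows; the transformation space and the timing function here are illustrative stand-ins, not the thesis's Unified Transformation Framework mapping or the AOF-Java implementation.

```python
import random

# Illustrative optimisation space: each point is one combination of transformations.
search_space = {"tile_size": [8, 16, 32, 64], "unroll_factor": [1, 2, 4, 8]}

def run_and_time(point):
    # Placeholder for applying the transformations, recompiling and timing the program;
    # in a real system the feedback is the measured execution time.
    return abs(point["tile_size"] - 32) + abs(point["unroll_factor"] - 4) + random.random()

def heuristic_random_search(attempts=50):
    best_point, best_time = None, float("inf")
    for _ in range(attempts):
        point = {k: random.choice(v) for k, v in search_space.items()}
        t = run_and_time(point)                    # feedback-directed: use the measurement
        if t < best_time:
            best_point, best_time = point, t
    return best_point, best_time

print(heuristic_random_search())
```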
576

On the configuration of arrays of floating wave energy converters

Child, Benjamin Frederick Martin January 2011 (has links)
In this thesis, certain issues relating to a number of wave energy absorbers operating in the same vicinity are investigated. Specifically, arrangements of the devices within such an array are sought, such that beneficial hydrodynamic interference between members is exploited and unwanted effects mitigated. Arrays of 'point absorber' devices as well as converters with multiple closely spaced floats are modelled and a frequency domain hydrodynamic solution derived. This is implemented as efficient computer code, capable of producing the full linear wave theory solution to any desired degree of accuracy. Furthermore, the results are verified against output from the boundary element code WAMIT. Initially, detailed analysis of an isolated absorber is conducted, with motion responses, forces, power output and velocity potentials at the free surface computed for a range of different device specifications. Elementary examples of arrays are then used to demonstrate the influence of factors such as device separation, wave heading angle, number of devices and array configuration upon collective performance. Subsequently, the power output from an array of five devices is optimised with respect to its layout, using two different routines. The first is a new heuristic approach, named the Parabolic Intersection (PI) method, that efficiently creates array configurations using only basic computations. The second is a Genetic Algorithm (GA) with a novel 'crossover' operator. Each method is applied to maximise the output at a given regular wave frequency and direction under two different power take-off regimes and also to minimise power in a third, cautionary example. The resulting arrays are then analysed and the optimisation procedures themselves evaluated. Finally, the effects of irregular seas on array interactions are investigated. The configurations that were optimised for regular wave climates are assessed in a range of irregular sea-states. The GA is then used once more to create optimal array layouts for each of these seas. The characteristics of the arrays are subsequently examined and the influence of certain spectral parameters on the optimal solutions considered. The optimisation procedures were both found to be effective, with the GA marginally outperforming the PI method in all cases. Significant positive and negative modifications to the power output were observed in the arrays optimised in regular waves, although the effects weakened when the same arrays were subjected to irregular sea-states. However, arrays optimised specifically in irregular seas exhibited differences in net power output equivalent to over half that produced from the same number of devices in isolation.
577

Real time evolutionary algorithms in robotic neural control systems

Jagadeesan, Ananda Prasanna January 2006 (has links)
This thesis describes the use of a Real-Time Evolutionary Algorithm (RTEA) to optimise an Artificial Neural Network (ANN) on-line (in this context "on-line" means while it is in use). Traditionally, Evolutionary Algorithms (Genetic Algorithms, Evolutionary Strategies and Evolutionary Programming) have been used to train networks before use, that is "off-line", as have other learning systems like Back-Propagation and Simulated Annealing. However, this means that the network cannot react to new situations (which were not in its original training set). The system outlined here uses a Simulated Legged Robot as a test-bed and allows it to adapt to a changing fitness function. An example of this in reality would be a robot walking from a solid surface onto an unknown surface (which might be, for example, rock or sand) while optimising its controlling network in real time to adjust its locomotive gait accordingly. The project initially developed a Central Pattern Generator (CPG) for a Bipedal Robot and used this to explore the basic characteristics of RTEA. The system was then developed to operate on a Quadruped Robot, and a test regime was set up which provided thousands of real-environment-like situations to test the RTEA's ability to control the robot. The programming for the system was done using Borland C++ Builder and no commercial simulation software was used. Through this means, the Evolutionary Operators of the RTEA were examined and their real-time performance evaluated. The results demonstrate that an RTEA can be used successfully to optimise an ANN in real time. They also show the importance of Neural Functionality and Network Topology in such systems, and new models of both neurons and networks were developed as part of the project. Finally, recommendations for a working system are given and other applications reviewed.
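As a highly simplified sketch of the on-line adaptation idea (a (1+1)-style evolutionary step standing in for the thesis's RTEA, CPG and simulated robot, which it does not reproduce), the loop below mutates a controller's parameters while the fitness function itself changes mid-run, mimicking the robot walking onto a different surface.

```python
import random

def fitness(weights, surface):
    # Stand-in for measured gait quality on the current surface; in a real system
    # this would come from sensor feedback while the robot walks.
    target = 0.5 if surface == "rock" else -0.3
    return -sum((w - target) ** 2 for w in weights)

def rtea_step(weights, surface, sigma=0.1):
    # One real-time step: mutate the controller genome and keep the child
    # only if it performs better under the *current* fitness function.
    child = [w + random.gauss(0, sigma) for w in weights]
    return child if fitness(child, surface) > fitness(weights, surface) else weights

weights = [random.uniform(-1, 1) for _ in range(8)]   # controller parameters (e.g. ANN weights)
surface = "rock"
for step in range(2000):
    if step == 1000:
        surface = "sand"          # the environment changes while the robot is running
    weights = rtea_step(weights, surface)
print(round(fitness(weights, surface), 4))            # close to 0 => re-adapted to the new surface
```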
578

An Integrated Approach to Determine Phenomenological Equations in Metallic Systems

Ghamarian, Iman 12 1900 (has links)
It is highly desirable to be able to make predictions of properties in metallic materials based upon the composition of the material and its microstructure. Unfortunately, the complexity of real, multi-component, multi-phase engineering alloys makes the provision of constituent-based (i.e., composition- or microstructure-based) phenomenological equations extremely difficult. Due to these difficulties, qualitative predictions are frequently used to study the influence of microstructure or composition on the properties. Neural networks have been used as a tool to obtain a quantitative model from a database; however, the resulting model is not a phenomenological one. In this study, a new method based upon the integration of three separate modeling approaches, specifically artificial neural networks, genetic algorithms, and Monte Carlo simulation, is proposed. These three methods, when coupled in the manner described in this study, allow for the extraction of phenomenological equations with a concurrent analysis of uncertainty. This approach has been applied to a multi-component, multi-phase microstructure exhibiting phases with varying spatial and morphological distributions. Specifically, it has been applied to derive a phenomenological equation for the prediction of yield strength in α+β processed Ti-6-4. The equation is consistent not only with the current dataset but also, where available, with the limited information regarding certain parameters such as the intrinsic yield strength of pure hexagonal close-packed alpha titanium.
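One way to picture the coupling described above is the hedged sketch below: a genetic algorithm fits the coefficients of a candidate phenomenological equation to data produced by a surrogate model (standing in for the trained neural network), and Monte Carlo repetition of the fit under perturbed data gives a spread on each coefficient. The equation form, surrogate and settings are illustrative assumptions, not the thesis's models.

```python
import random
import statistics

def surrogate(x):                 # stand-in for the ANN's prediction of a property
    return 400.0 + 50.0 * x ** 0.5

xs = [i / 10 for i in range(1, 41)]

def fit_once(noise=5.0):
    # Monte Carlo perturbation of the surrogate data, then a simple GA fit of
    # the candidate equation y = c0 + c1 * x**c2.
    ys = [surrogate(x) + random.gauss(0, noise) for x in xs]
    def err(c):
        return sum((c[0] + c[1] * x ** c[2] - y) ** 2 for x, y in zip(xs, ys))
    pop = [[random.uniform(0, 500), random.uniform(0, 100), random.uniform(0.1, 1.0)]
           for _ in range(40)]
    for _ in range(200):          # truncation selection + blend crossover + mutation
        pop.sort(key=err)
        parents = pop[:10]
        children = []
        while len(children) < 30:
            a, b = random.sample(parents, 2)
            children.append([(ai + bi) / 2 + random.gauss(0, 1.0) for ai, bi in zip(a, b)])
        pop = parents + children
    return min(pop, key=err)

fits = [fit_once() for _ in range(20)]   # repeated fits give coefficient uncertainty
for i, name in enumerate(["c0", "c1", "exponent"]):
    vals = [f[i] for f in fits]
    print(name, round(statistics.mean(vals), 2), "+/-", round(statistics.stdev(vals), 2))
```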
579

Solving a highly constrained multi-level container loading problem from practice

Olsson, Jonas January 2017 (has links)
The container loading problem considered in this thesis is to determine placements of a set of packages within one or multiple shipping containers. Smaller packages are consolidated on pallets prior to being loaded in the shipping containers together with larger packages. There are multiple objectives, which may be summarized as fitting all the packages while achieving good stability of the cargo as well as of the shipping containers themselves. According to recent literature reviews, previous research in the field has to a large extent neglected issues relevant in practice. Our real-world application was developed for the industrial company Atlas Copco to be used for sea container shipments at their Distribution Center (DC) in Texas, USA. Hence all applicable practical constraints faced by the DC operators had to be treated properly. A high variety in sizes, weights and other attributes, such as stackability, among the packages added complexity to an already challenging combinatorial problem. Inspired by how the DC operators plan and perform loading manually, the batch concept was developed, which refers to grouping boxes based on their characteristics and solving subproblems in the form of partial load plans. In each batch, an extensive placement heuristic and a load plan evaluation run iteratively, guided by a Genetic Algorithm (GA). In the placement heuristic, potential placements are evaluated using a scoring function that considers aspects of the current situation, such as space utilization, horizontal support and keeping heavier boxes closer to the floor. The scoring function is weighted by coefficients corresponding to the chromosomes of an individual in the GA population. Consequently, the fitness value of an individual in the GA population is the rating of a load plan. The loading optimization software has been tested and successfully implemented at the DC in Texas. The software has proven capable of generating satisfactory load plans within acceptable computation times, which has resulted in reduced uncertainty and labor usage in the loading process. Analysis using real sea container shipments shows that the GA is able to tune the scoring coefficients to suit the particular problem instance being solved.
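A hedged sketch of the weighted-scoring idea: candidate placements are rated by a weighted sum of simple criteria, and the weight vector is what a genetic algorithm would tune (each weight corresponding to one gene of an individual). The criteria and placement fields below are illustrative assumptions, not Atlas Copco's actual rules.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    space_utilisation: float   # fraction of the chosen free space filled, 0..1
    horizontal_support: float  # fraction of the box base resting on something, 0..1
    height_from_floor: float   # placement height normalised by container height, 0..1
    box_weight: float          # box weight normalised by the heaviest box, 0..1

def score(p: Placement, w) -> float:
    # Reward utilisation and support; penalise putting heavy boxes high up.
    return (w[0] * p.space_utilisation
            + w[1] * p.horizontal_support
            - w[2] * p.box_weight * p.height_from_floor)

weights = [0.5, 0.3, 0.2]   # in the full system these would come from a GA individual
candidates = [Placement(0.9, 1.0, 0.8, 0.9), Placement(0.7, 1.0, 0.1, 0.9)]
best = max(candidates, key=lambda p: score(p, weights))
print(best)                  # the low, well-supported placement wins here
```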
580

Testing Safety Critical Avionics Software Using LBTest

Stenlund, Sebastian January 2016 (has links)
A case study of the tool LBTest, illustrating its benefits and limitations in terms of usability, results and costs. The study shows the use of learning-based testing on a safety-critical application in the avionics industry. While the tool requires the user to have theoretical knowledge of its inner workings, the process of using it brings benefits in terms of requirement analysis and the possibility of finding design and implementation errors in both the early and late stages of development.
