
Algorithms for coalition formation in multi-agent systems

Rahwan, Talal, January 2007
Coalition formation is a fundamental form of interaction that allows the creation of coherent groupings of distinct, autonomous agents in order to efficiently achieve their individual or collective goals. Forming effective coalitions is a major research challenge in the field of multi-agent systems. Central to this endeavour is the problem of determining which of the possible coalitions to form in order to achieve some goal. This usually requires calculating a value for every possible coalition, known as the coalition value, which indicates how beneficial that coalition would be if it were formed. Since the number of possible coalitions grows exponentially with the number of agents involved, it is more efficient to distribute this calculation among the agents than to have a single agent compute every value; this exploits all the computational resources available to the system and avoids a single point of failure.

Against this background, we develop a novel algorithm for distributing the value calculation among the cooperative agents. Specifically, using our algorithm, each agent is assigned some part of the calculation such that the agents' shares are exhaustive and disjoint. Moreover, the algorithm is decentralized, requires no communication between the agents, has minimal memory requirements, and can reflect variations in the computational speeds of the agents. To evaluate its effectiveness, we compare it with the only other algorithm available in the literature for distributing the coalitional value calculations (due to Shehory and Kraus). For the case of 25 agents, the distribution process of our algorithm took less than 0.02% of the time, the values were calculated using 0.000006% of the memory, the calculation redundancy was reduced from 383,229,848 to 0, and the total number of bytes sent between the agents dropped from 1,146,989,648 to 0. For larger numbers of agents, these improvements become exponentially better.

Once the coalitional values are calculated, the agents usually need to find a combination of coalitions in which every agent belongs to exactly one coalition and by which the overall outcome of the system is maximized. This problem, widely known as the coalition structure generation problem, is extremely challenging: the number of possible combinations grows so quickly with the number of agents that it is impossible to enumerate the entire search space, even for small numbers of agents. Many algorithms have been proposed to solve this problem using different techniques, ranging from dynamic programming to integer programming to stochastic search, all of which suffer from major limitations relating to execution time, solution quality, and memory requirements.

With this in mind, we develop a novel anytime algorithm for solving the coalition structure generation problem. Specifically, the algorithm partitions the space of all potential coalition structures into sub-spaces containing coalition structures that are similar according to some criterion, such that these sub-spaces can be pruned by identifying their bounds. Using this representation, the algorithm can then search the selected sub-space(s) very efficiently using a branch-and-bound technique. We empirically show that we are able to find optimal solutions in 0.082% of the time required by the fastest available algorithm in the literature (for 27 agents), using only 33% of the memory required by that algorithm. Moreover, our algorithm is the first able to solve the coalition structure generation problem for more than 27 agents in reasonable time (less than 90 minutes for 30 agents, as opposed to around 2 months for the current state of the art). The algorithm is anytime: if interrupted before normal termination, it still provides a solution guaranteed to be within a bound of the optimal one. Moreover, the guarantees we provide on solution quality are significantly better than those of previous state-of-the-art algorithms designed for this purpose. For example, given 21 agents, after only 0.0000002% of the search space has been searched, our algorithm usually guarantees that the solution quality is no worse than 91% of the optimal value, while previous algorithms only guarantee 9.52%. Our guarantee usually reaches 100% after 0.0000019% of the space has been searched, while the guarantee provided by other algorithms can never exceed 50% until the whole space has been searched. Again, these improvements become exponentially better for larger numbers of agents.
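To illustrate the distribution idea only (this is not the thesis's algorithm, whose share assignment, memory behaviour, and speed weighting are more sophisticated), here is a minimal sketch in which agents agree on a canonical enumeration order of all coalitions and take disjoint, exhaustive shares round-robin, with no communication:

```python
from itertools import chain, combinations

def coalition_share(agents, my_index):
    """Yield this agent's share of all non-empty coalitions.

    Every agent enumerates coalitions in the same canonical order and keeps
    every len(agents)-th one, so the shares are disjoint and exhaustive and
    no messages need to be exchanged. (Round-robin is an assumption made for
    brevity; it does not reproduce the thesis's memory or load-balancing
    properties.)
    """
    all_coalitions = chain.from_iterable(
        combinations(agents, size) for size in range(1, len(agents) + 1))
    for i, coalition in enumerate(all_coalitions):
        if i % len(agents) == my_index:
            yield coalition

# Example: agent 0 of 3 computes values only for its own share.
for coalition in coalition_share(["a", "b", "c"], my_index=0):
    print(coalition)  # the coalition value calculation would happen here
```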

The finite element method in underwater acoustics

Pack, Peter Michael Walter, January 1986
A Finite Element Method (FEM) is developed to calculate rotationally symmetric acoustic propagation over short range intervals (0-5 km) in shallow oceans (0-200 m deep) at low frequencies (0-50 Hz). The method allows full two-way wave propagation in range dependent environments and includes coupling to a full elastic seabed. Numerical results from a computer program are presented for propagation upslope, downslope, over seamounts and across trenches in the seabed. The seabed is modelled as a pressure release surface, a fluid halfspace and an elastic, solid halfspace and the implications of each type of model are discussed. The halfspaces, being represented by a new set of infinite elements, are modelled without truncation. The results are presented primarily as plots of transmission loss against range for a fixed depth receiver. Subsidiary results show the effect of depth averaging the receiver location, and extract mode amplitude data to reveal the strength of mode coupling and backscatter in different environments.
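For reference, the transmission loss plotted in these results has the conventional underwater-acoustics definition (standard usage, not quoted from the thesis):

```latex
\mathrm{TL}(r) = -20 \log_{10} \left| \frac{p(r)}{p_{0}} \right|
```

where p(r) is the acoustic pressure at the receiver and p_0 is the pressure at a reference distance (conventionally 1 m) from the source.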

Evaluating reinforcement learning for game theory application: learning to price airline seats under competition

Collins, Andrew, January 2009
Applied Game Theory has been criticised for not being able to model real decision-making situations. A game's sensitive nature and the difficulty of determining the utility payoff functions make it hard for a decision maker to rely upon any game-theoretic results. The models therefore tend to be simple, owing to the complexity of solving them (i.e. finding the equilibrium). In recent years, thanks to increases in computing power, different computer modelling techniques have been applied in Game Theory; major examples are Artificial Intelligence methods, e.g. Genetic Algorithms, Neural Networks and Reinforcement Learning (RL). These techniques allow the modeller to incorporate Game Theory within their models (or simulations) without necessarily knowing the optimal solution. After a warm-up period of repeated episodes, the model learns to play the game well (though not necessarily optimally); this is a form of simulation-optimization. The objective of the research is to investigate the practical usage of RL within a simple sequential stochastic airline seat pricing game. Different forms of RL are considered and compared to the optimal policy, which is found using standard dynamic programming techniques. The airline game and the RL methods display various interesting phenomena, which are also discussed. For completeness, convergence proofs for the RL algorithms were constructed.
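As a sketch of the general approach (a single-airline tabular Q-learning toy; the competitive game, demand model, and the thesis's specific RL variants are assumptions here, not its actual formulation):

```python
import random
from collections import defaultdict

# State = (seats_left, periods_left); actions = a small set of price points.
# The demand curve and parameters below are illustrative assumptions.
PRICES = [50, 100, 150, 200]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)

def sale_probability(price):
    return max(0.0, 1.0 - price / 250.0)  # assumed linear demand curve

def choose_price(state):
    if random.random() < EPSILON:                      # explore
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: Q[(state, p)])    # exploit

def run_episode(seats=10, periods=20):
    state = (seats, periods)
    while state[0] > 0 and state[1] > 0:
        price = choose_price(state)
        sold = random.random() < sale_probability(price)
        reward = price if sold else 0.0
        next_state = (state[0] - int(sold), state[1] - 1)
        best_next = max(Q[(next_state, p)] for p in PRICES)
        # Standard Q-learning update
        Q[(state, price)] += ALPHA * (reward + GAMMA * best_next - Q[(state, price)])
        state = next_state

for _ in range(50_000):   # the warm-up period of repeated episodes
    run_episode()
```

After training, the greedy policy over Q can be compared against the dynamic-programming optimum, which is the kind of comparison the thesis performs.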

Computer assisted tracking of university student writing in English as a foreign language

Alghamdi, Fatimah M. A., January 2010
The study tracked development across university levels in writing in English as a foreign language among students of two disciplines: English Language and Literature, and Computer Science. Informed by the cognitive process theory of writing, other theoretical accounts of development in writing and findings of relevant literature, the study set out to test hypothesized development in fluency, revision behaviour, writers' awareness and concerns, and text quality in the writing of university students. Moreover, the study aimed to find out whether students from the two majors demonstrate different developmental patterns in terms of these variables, and whether variation in text quality can be related to writing process and awareness. The study utilized a computer logging program (ScriptLog) as the main recording, observing and playback research tool; elicited responses to immediate recall questions; and obtained independent text assessment. It also employed a stimulated recall procedure to get a closer look at a small proportion of individual writing sessions.

Quantitative data analysis revealed that, across the university levels, English majors demonstrated systematic development in their writing process and product, with progressively increased fluency, a higher-level and more global revision orientation, and better awareness of the demands of task and audience. They also exhibited considerable and consistent improvement in text quality. Computer Science students, on the other hand, displayed a different pattern. In their fourth level there was a notable increase in the rate of production and the proportion of conceptual revisions, but a significant decrease in text quality compared with their three-semester juniors. In their eighth semester, they demonstrated improvement but remained behind their English-major peers in fluency measures and text quality. These findings underline the significance of formal L2 knowledge in assisting automatic access to the mental linguistic repertoire and reducing concerns over local and surface-level linguistic details, and they stress the importance of continued formal facilitation of L2.

In addition, a number of participants attended individual writing sessions in which their writing activity was followed by stimulated recall interviews. A close investigation of participants' reports of their writing strategies and concerns confirms the trends found in the quantitative analysis. However, the qualitative inquiry offers more insight into the development of university students. It appears that the tertiary academic experience has in the long run benefited both groups of writers. Senior participants of both majors were able to take authority over their texts. They acted less at surface and local levels and more at conceptual and global levels, moving information around and changing larger chunks of text in order to minimize ambiguity and respond to the demands of audience. They showed consideration and utilization of content knowledge they had acquired in their subject area.

Enhancements to global design optimization techniques

Sobester, A., January 2003
Modern engineering design optimization relies to a large extent on computer simulations of physical phenomena. The computational cost of such high-fidelity physics-based analyses typically places a strict limit on the number of candidate designs that can be evaluated during the optimization process. The more global the scope of the search, the greater are the demands placed by this limited budget on the efficiency of the optimization algorithm. This thesis proposes a number of enhancements to two popular classes of global optimizers. First, we put forward a generic algorithm template that combines population-based stochastic global search techniques with local hillclimbers in a Lamarckian learning framework. We then test a specific implementation of this template on a simple aerodynamic design problem, where we also investigate the feasibility of using an adjoint flow-solver in this type of global optimization. In the second part of this work we look at optimizers based on low-cost global surrogate models of the objective function. We propose a heuristic that enables efficient parallelisation of such strategies (based on the expected improvement infill selection criterion). We then look at how the scope of surrogate-based optimizers can be controlled and how they can be set up for high efficiency.
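The expected improvement criterion referred to here has a standard closed form for a minimization problem (stated for reference; the thesis's parallelisation heuristic itself is not reproduced):

```latex
\mathrm{EI}(x) = \bigl(y_{\min} - \mu(x)\bigr)\,\Phi(z) + \sigma(x)\,\phi(z),
\qquad z = \frac{y_{\min} - \mu(x)}{\sigma(x)}
```

where mu(x) and sigma(x) are the surrogate's predicted mean and standard error at x, y_min is the best objective value observed so far, and Phi and phi are the standard normal CDF and PDF. New infill points are placed where EI is largest, balancing exploitation of good predictions against exploration of uncertain regions.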

Efficient global aerodynamic optimisation using expensive computational fluid dynamics simulations

Forrester, Alexander I. J., January 2004
The expense of high fidelity computational fluid dynamics, in terms of time and amount of computing resources required, excludes such methods from the early stages of aircraft design. Yet it is only in the early, conceptual stage of aircraft development that a wide range of designs is considered and global, rather than local, optimisation can play a key role. This thesis deals with methods which may allow high cost computer simulations to be used within a global optimisation design process. The first half of the thesis concentrates on the use of surrogate modelling of the optimisation design space, which allows cheap approximations to be used in lieu of expensive computer simulations. The process is automated, and existing statistical methods are modified to accommodate problems associated with the simulation of fluid flow and with uncertainty within an automated system. The re-interpolation of a regression model of noisy data is presented as a method of improving convergence towards a global optimum. The second half of the thesis develops methods of using partially converged computational fluid dynamics simulations within a surrogate modelling optimisation process. Significant time savings are made possible by reducing computational effort directed at producing a surrogate for regions of poor designs and concentrating resources on modelling regions of promising designs.
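A minimal sketch of the re-interpolation idea, using a generic Gaussian process in place of the thesis's kriging formulation (the data, kernel, and noise level are illustrative assumptions): a regressing model filters the noise, then its smoothed predictions are interpolated exactly so that the error estimate vanishes at sampled points, which keeps expected-improvement searches converging.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (20, 1))
y = np.sin(6 * X).ravel() + rng.normal(0, 0.1, 20)   # noisy responses

# Step 1: regression (alpha > 0 tolerates noise and smooths through it)
regressor = GaussianProcessRegressor(kernel=RBF(0.2), alpha=0.1**2)
regressor.fit(X, y)

# Step 2: re-interpolation of the smoothed values (alpha ~ 0, fixed kernel)
y_smooth = regressor.predict(X)
interpolator = GaussianProcessRegressor(kernel=regressor.kernel_,
                                        alpha=1e-10, optimizer=None)
interpolator.fit(X, y_smooth)   # passes exactly through the smoothed data
```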

Manufacturing cost based methodologies for design optimisation

Rao, Abhijit R., January 2006
The objective of this research is to develop a methodology for incorporating cost models based on manufacturing process information within multidisciplinary design optimisation problems. Although cost considerations are critical in product design and development, cost models are rarely used in optimising designs, mainly owing to the difficulty of acquiring accurate manufacturing cost estimates in early design. In this thesis, we present a new technique for embedding manufacturing process knowledge within a modelling tool that can provide accurate cost estimates in design optimisation applications. We use the proposed cost estimating technique to optimise the geometry of two components from a Rolls-Royce civil aircraft engine, designing a sequential workflow consisting of CAD, analysis and cost models, along with optimisation algorithms, within an integrated system. Initial results from the first component (treated as a model problem) show that significant cost savings, as well as shape changes, can be achieved by using an accurate cost model in the objective function. The second case study is the shape optimisation of the initial 2D profile of a high pressure turbine disc. We develop highly flexible geometry parameterisation schemes to accurately represent the manufacturing, supplier and inspection constraints inherent in the cost model for the disc. Significant differences in the geometry are achieved when the design is optimised for low manufacturing cost rather than for traditional weight minimisation, leading to the second part of this thesis, which examines the hypothesis that low volume and low cost are conflicting attributes. Multiobjective optimisation approaches are then utilised to generate a Pareto front of designs with optimum combinations of both objectives. We then list the obstacles that prevent a straightforward application of multiobjective techniques to sophisticated design problems and propose modifications that enhance the quality of the results achieved. Finally, a flowchart detailing the design optimisation framework used in this thesis is described for the benefit of future applications. We conclude by stating the salient contributions of this work and interesting avenues of future research.
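For readers unfamiliar with the Pareto-front concept used above, a small self-contained sketch (the cost/weight values are invented for illustration; the thesis's multiobjective optimisers are not reproduced):

```python
# Extract the Pareto front from candidate designs scored on two
# conflicting objectives, minimising both manufacturing cost and weight.
def pareto_front(designs):
    """Return designs not dominated by any other design."""
    front = []
    for d in designs:
        dominated = any(
            o["cost"] <= d["cost"] and o["weight"] <= d["weight"]
            and (o["cost"] < d["cost"] or o["weight"] < d["weight"])
            for o in designs)
        if not dominated:
            front.append(d)
    return front

designs = [{"cost": 120, "weight": 9.0}, {"cost": 95, "weight": 11.5},
           {"cost": 150, "weight": 8.2}, {"cost": 110, "weight": 10.0},
           {"cost": 130, "weight": 9.5}]   # dominated by the first design
print(pareto_front(designs))   # the non-dominated cost/weight trade-offs
```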

Application of genetic algorithms for irrigation water scheduling

Haq, Zia Ul, January 2009
A typical irrigation scheduling problem is one of preparing a schedule to service a group of outlets, which may be serviced either sequentially or simultaneously. This problem has an analogy with the classical earliness/tardiness machine scheduling problems in operations research (OR). In previously published work, integer programmes were used to solve such problems; however, these scheduling problems belong to a class of combinatorial problems known to be computationally demanding (NP-hard), as is widely reported in OR. Hence integer programming can only be used to solve relatively small problems, usually in a research environment where considerable computational resources and time can be allocated to solving a single schedule. For practical applications, meta-heuristics such as genetic algorithms, simulated annealing or tabu search need to be used; however, as reported in the literature, these need to be formulated carefully and tested thoroughly.

This thesis demonstrates how arranged-demand irrigation scheduling problems can be correctly formulated and solved using genetic algorithms (GA). By interpreting arranged-demand irrigation scheduling problems as single- or multi-machine scheduling problems, the wealth of information accumulated over decades in OR is capitalized on. The objective is to schedule irrigation supplies as close as possible to the farmers' requested supply times, providing a better level of service. This is in line with the concept of Service Oriented Management (SOM), described in recent literature as the central goal of irrigation modernization. The thesis also emphasizes the importance of rigorous evaluation of heuristics such as GA.

First, a series of single-machine models is presented that models the warabandi (rotation) type of irrigation distribution system, in which farmers are supplied water sequentially. Next, multi-machine models are presented for irrigation water distribution systems in which several farmers may be supplied water simultaneously. Two types of multi-machine model are defined: simple multi-machine models, in which all farmers are supplied with identical discharges, and complex multi-machine models, in which farmers are allowed to demand different discharges. Two different approaches, the stream tube approach and the time block approach, are used to develop the multi-machine models; these approaches are evaluated and compared to determine their suitability for irrigation scheduling problems, which is one of the significant contributions of this thesis. The multi-machine models are further enhanced by incorporating travel times, an important feature of surface irrigation canal systems that needs to be taken into account when determining irrigation schedules. The models presented in this thesis are unique in many aspects, and the potential of GA for a wide range of irrigation scheduling problems under an arranged-demand irrigation system is fully explored through a series of computational experiments.
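A minimal sketch of the single-machine (warabandi) analogy, assuming invented durations and requested times rather than the thesis's formulation: a GA evolves permutations of outlets to minimise total deviation from the requested supply times.

```python
import random

DURATIONS = [3, 2, 4, 2, 5]          # hours each outlet needs (assumed)
REQUESTED = [2, 5, 6, 10, 12]        # requested start times (assumed)

def penalty(order):
    """Total earliness/tardiness of sequentially serviced outlets."""
    t, total = 0, 0
    for outlet in order:
        total += abs(t - REQUESTED[outlet])
        t += DURATIONS[outlet]
    return total

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

population = [random.sample(range(5), 5) for _ in range(30)]
for _ in range(200):
    population.sort(key=penalty)
    parents = population[:10]                       # elitist selection
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(20)]
    for child in children:
        if random.random() < 0.2:
            mutate(child)
    population = parents + children

print(population[0], penalty(population[0]))        # best schedule found
```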

Aerodynamic design optimization using flow feature parameterization

Barrett, Thomas Robin, January 2007
Design optimization methods using high-fidelity computational fluid dynamics simulations are becoming increasingly popular in the area of aerodynamic design, motivating efforts to make these methods more computationally efficient. Such design strategies typically define the aerodynamic product using a parametric model of the geometry, but this can often require a large number of design variables, increasing the computational cost. This thesis proposes that a parametric model of aerodynamic flow features, rather than geometry, can be a parsimonious method of representing designs, giving a reduction in the number of design parameters required for optimization. The parameterization of flow features is coupled with inverse design in order to recover the corresponding geometry. While an expensive analysis code is used in evaluating design performance, computational cost is reduced by using a low-fidelity code in the inverse design process. This newly presented method is demonstrated using four case studies in 2-D airfoil design, in which the parameterized flow feature is the surface pressure distribution, and two case studies in 3-D wing design, in which the spanwise loading distribution is parameterized. These strategies are consistently compared against a benchmark design search method which uses a conventional parameterization of the geometry. The two methods are described in detail, and their relative performance is analysed and discussed. The newly presented method is found to converge towards the optimum design significantly more quickly than the benchmark method, providing designs with greater performance for a given computational expense. A parameterization of flow features can generate designs with higher quality and detail than a geometry-based method of the same dimensionality.
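A skeletal sketch of the optimization architecture described above, with placeholder models standing in for the real inverse-design and CFD codes (the pressure-distribution shape, the toy mappings, and all parameter values are assumptions, not the thesis's formulation):

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 50)        # chordwise stations

def target_cp(params):
    a, b = params                    # two flow-feature design variables
    return a * (1 - x) * np.exp(-b * x)   # assumed pressure-distribution shape

def inverse_design(cp_target):
    # placeholder for a low-fidelity inverse method recovering a geometry
    return 0.1 * cp_target

def high_fidelity_drag(geometry):
    # placeholder for the expensive analysis code scoring the geometry
    return float(np.sum(geometry**2) + 0.01)

def objective(params):
    # search happens in flow-feature space; geometry is recovered en route
    return high_fidelity_drag(inverse_design(target_cp(params)))

result = minimize(objective, x0=[1.0, 2.0], method="Nelder-Mead")
print(result.x, result.fun)
```

The point of the architecture is that the optimizer's search space is the (small) set of flow-feature parameters rather than a (large) set of geometric variables.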

Computer simulation of lipids and DNA using a coarse grain methodology

Chellapa, George, January 2009
The nucleus of the eukaryotic cell contains a large pool of lipids together with structural proteins and genomic DNA. The project aim was to develop simple and robust lipid and DNA models that allow these complex molecules to be mixed together in order to elucidate their possible interactions. The large proportion of lipids found within the nucleus makes it likely that they exist in aggregates, although the actual role and structure in which they exist is unknown. While there has not been substantial work on modelling interactions between lipids and DNA in order to better understand the nucleus, a substantial body of work exists on lipid/DNA complexes in relation to gene therapy. In many cases, however, these simulations are too simple, and the structures formed are pre-imposed to a certain degree. Our model attempts to simulate these interactions without such pre-imposed conditions, relying solely on interactions between the particles to drive the structures being formed.

A coarse-graining approach, in which several groups of atoms are subsumed into single interaction sites, was deemed suitable given the complexity of modelling a mixture of DNA and lipids together with the solvent and ion environment. To this end, new lipid, DNA, ion and solvent models were developed in a purpose-built molecular dynamics package called LANKA (Lipid And Nucleic acid Komputer Algorithm). The lipids in the model are represented as polar ellipsoids and the solvent as spheres with embedded dipoles; the interactions between the lipids and solvent are modelled using the Gay-Berne potential. The developed lipid model was able to self-assemble into a stable bilayer phase and reproduce many bilayer properties of a liquid crystal phase. The model was then extended to capture some of the other lipid phases seen in nature, including lyotropic phase transitions. A simple study of lipid mixtures was also undertaken; the importance of considering multicomponent lipid systems has increasingly been highlighted in the literature as a way of making lipid models more realistic, and the developed lipid models are simple enough to extend towards simulating lipid raft and domain formation. Simulation of DNA has in the past largely focused on atomistic studies. While these have proved valuable, they do not consider the macroscopic length scales of the molecule, and simplified models trying to capture long length scales have had to compromise on molecular-level detail. Coarse-grain models, while trying to bridge the gap, have also remained largely idealistic in nature.
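For reference, the Gay-Berne potential mentioned above has the general form (the standard anisotropic generalisation of Lennard-Jones; the thesis's dipolar extensions are not reproduced here):

```latex
U_{\mathrm{GB}}(\hat{u}_i,\hat{u}_j,\mathbf{r}) =
4\,\epsilon(\hat{u}_i,\hat{u}_j,\hat{r})
\left[
\left(\frac{\sigma_0}{r-\sigma(\hat{u}_i,\hat{u}_j,\hat{r})+\sigma_0}\right)^{12}
-
\left(\frac{\sigma_0}{r-\sigma(\hat{u}_i,\hat{u}_j,\hat{r})+\sigma_0}\right)^{6}
\right]
```

where u_i and u_j are the particle symmetry axes, r is the centre-to-centre separation, and sigma and epsilon are orientation-dependent range and well-depth functions, so that ellipsoidal particles attract and repel anisotropically.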
