1

Defect correction based domain decomposition methods for some nonlinear problems

Siahaan, Antony January 2011 (has links)
Defect correction schemes, as a class of nonoverlapping domain decomposition methods, offer several advantages in the way they split a complex problem into several subdomain problems of lower complexity. The schemes need a nonlinear solver to take care of the residual at the interface. The adaptive-α solver can converge locally in the ∞-norm, where the sufficient condition requires a relatively small local neighbourhood and a strongly diagonally dominant Jacobian matrix with a very small condition number. Yet its advantage can be highly significant for computational cost, since it needs only a scalar as the approximation of the Jacobian matrix. Other nonlinear solvers employed for the schemes are a Newton-GMRES method, a Newton method with a finite difference Jacobian approximation, and nonlinear conjugate gradient solvers with Fletcher-Reeves and Polak-Ribière search direction formulas. The schemes are applied to three nonlinear problems. The first problem is heat conduction in a multichip module, where the domain is assembled from many components of different conductivities and physical sizes. Here the implementations of the schemes satisfy the component meshing and gluing concept. A finite difference approximation of the residual of the governing equation turns out to be a better defect equation than the equality of normal derivatives. Of all the nonlinear solvers implemented in the defect correction scheme, the nonlinear conjugate gradient method with the Fletcher-Reeves search direction has the best performance. The second problem is a 2D single-phase fluid flow with heat transfer, where the PHOENICS CFD code is used to run the subdomain computations. The Newton method with a finite difference Jacobian is a reasonable interface solver for coupling these subdomain computations. The final problem is multiphase heat and moisture transfer in a porous textile. The PHOENICS code is also used to solve the system of partial differential equations governing the multiphase process in each subdomain, while the coupling of the subdomain solutions is handled by the defect correction schemes through some FORTRAN code. A scheme using a modified-α method fails to obtain decent solutions in both the single-layer and two-layer cases. On the other hand, the scheme using the above Newton method produces satisfactory results for both cases, leading initially distant interface data to a well-converged solution. However, it is found that in general the number of nonlinear iterations of the defect correction schemes increases with mesh refinement.
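To make the interface iteration concrete, the following is a minimal, generic sketch of a defect correction step that uses a single scalar as the approximate Jacobian, loosely in the spirit of the adaptive-α idea above; the stand-in residual, the value of alpha and the solve_subdomains placeholder are illustrative assumptions rather than the solvers or problems treated in the thesis.

```python
import numpy as np

def solve_subdomains(u):
    # Placeholder standing in for the subdomain solves: given interface data u,
    # return the interface defect (e.g. the mismatch of normal fluxes).
    return np.tanh(u) - 0.3 * u - 0.1

def defect_correction(u0, alpha=0.5, tol=1e-8, max_iter=200):
    # Interface iteration in which one scalar alpha plays the role of the
    # approximate Jacobian, so each correction costs a single defect evaluation.
    u = u0.copy()
    for k in range(max_iter):
        d = solve_subdomains(u)
        if np.linalg.norm(d, np.inf) < tol:   # convergence measured in the infinity norm
            return u, k
        u = u - d / alpha                     # scalar-Jacobian correction step
    return u, max_iter

u, iters = defect_correction(np.zeros(8))
print(iters, u[0])
```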
2

Investigation of a teleo-reactive approach for the development of autonomic manager systems

Hawthorne, James January 2013 (has links)
As the demand for more capable and more feature-rich software increases, the complexity of design, implementation and maintenance also increases exponentially. This becomes a problem when the complexity prevents developers from writing, improving, fixing or otherwise maintaining software to meet specified demands whilst still reaching an acceptable level of robustness. When complexity becomes too great, the software becomes impossible to manage effectively, even by large teams of people. One way to address the problem is an autonomic approach to software development. Autonomic software aims to tackle complexity by allowing the software to manage itself, thus reducing the need for human intervention and allowing it to reach a maintainable state. Many techniques have been investigated for the development of autonomic systems, including policy-based designs, utility functions and advanced architectures. A unique approach to the problem is the teleo-reactive programming paradigm. This paradigm offers a robust and simple structure on which to develop systems. It allows developers the freedom to express their intentions in a logical manner whilst the increased robustness reduces the maintenance cost. Teleo-reactive programming is an established solution to low-level agent-based problems such as robot navigation and obstacle avoidance, but the technique shows behaviour which is consistent with higher-level autonomic solutions. This project therefore investigates the extent of the applicability of teleo-reactive programming as an autonomic solution. Can the technique be adapted to allow a more ideal 'fitness for purpose' for autonomics whilst causing minimal changes to the tried and tested original structure and meaning? Does the technique introduce any additional problems, and can these be addressed with improvements to the teleo-reactive framework? Teleo-reactive programming is an interesting approach to autonomic computing because a teleo-reactive program's state is not predetermined at any moment in time; instead, rules execute on a priority basis according to the current environmental context (i.e. not in any strict procedural way) whilst still aiming at the intended goal. This method has been shown to be very robust and exhibits some of the qualities of autonomic software.
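As a toy illustration of that priority-ordered rule structure, the sketch below scans an ordered list of condition/action rules each cycle and fires the first rule whose condition holds; the environment fields and actions are invented for the example and are not drawn from the thesis.

```python
# Ordered (condition, action) rules scanned top-down every cycle; the first rule
# whose condition holds in the current environment fires. No procedural state is
# kept: behaviour is re-derived from the environment on each cycle.

def make_rules():
    return [
        (lambda env: env["at_goal"],        lambda env: None),                                      # goal reached: idle
        (lambda env: env["obstacle_ahead"], lambda env: env.update(heading=env["heading"] + 90)),   # avoid obstacle
        (lambda env: True,                  lambda env: env.update(distance=env["distance"] - 1)),  # default: advance
    ]

def tr_step(rules, env):
    for condition, action in rules:   # priority order: the first true condition wins
        if condition(env):
            action(env)
            return

env = {"at_goal": False, "obstacle_ahead": False, "heading": 0, "distance": 5}
rules = make_rules()
for _ in range(10):
    env["at_goal"] = env["distance"] <= 0
    tr_step(rules, env)
print(env)   # the goal condition ends up satisfied and the lower-priority rules stay dormant
```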
3

An investigation into the feasibility, problems and benefits of re-engineering a legacy procedural CFD code into an event driven, object oriented system that allows dynamic user interaction

Ewer, John Andrew Clark January 2000 (has links)
This research started with questions about how the overall efficiency, reliability and ease of use of Computational Fluid Dynamics (CFD) codes could be improved using any available software engineering and Human Computer Interaction (HCI) techniques. Much of this research has been driven by the difficulties experienced by novice CFD users in the area of Fire Field Modelling, where the introduction of performance-based building regulations has led to a situation in which non-CFD experts increasingly make use of CFD techniques, with varying degrees of effectiveness, for safety-critical research. Formerly, such modelling has not been helped by the mode of use, the high degree of expertise required from the user and the complexity of specifying a simulation case. Many of the early stages of this research were channelled by perceived limitations of the original legacy CFD software that was chosen as a framework for these investigations. These limitations included poor code clarity, poor overall efficiency due to the use of batch-mode processing, poor assurance that the final results presented by the CFD code were correct, and the requirement for considerable expertise on the part of users. The innovative incremental re-engineering techniques developed to reverse-engineer, re-engineer and improve the internal structure and usability of the software were arrived at as a by-product of the research into overcoming the problems discovered in the legacy software. The incremental re-engineering methodology was considered to be of enough importance to warrant inclusion in this thesis. Various HCI techniques were employed to attempt to overcome the efficiency and solution-correctness problems. These investigations have demonstrated that the quality, reliability and overall run-time efficiency of CFD software can be significantly improved by the introduction of run-time monitoring and interactive solution control. It should be noted that the re-engineered CFD code is observed to run more slowly than the original FORTRAN legacy code due, mostly, to the changes in the calling architecture of the software and differences in compiler optimisation; but it is argued that the overall effectiveness, reliability and ease of use of the prototype software are all greatly improved. Investigations into dynamic solution control (made possible by the open software architecture and the interactive control interface) have demonstrated considerable savings when using solution control optimisation. Such investigations have also demonstrated the potential for improved assurance of correct simulation when compared with the batch mode of processing found in most legacy CFD software. Investigations have also been conducted into the efficiency implications of using unstructured group solvers. These group solvers are a derivation of the simple point-by-point Jacobi Over-Relaxation (JOR) and Successive Over-Relaxation (SOR) solvers [CROFT98], and using group solvers allows the computational processing to be more effectively targeted on regions or logical collections of cells that require more intensive computation. Considerable savings have been demonstrated for the use of both static and dynamic group membership when using these group solvers for a complex 3-dimensional fire modelling scenario. Furthermore, the improvements in the system architecture (brought about as a result of software re-engineering) have helped to create an open framework that is both easy to comprehend and extend. This is in spite of the underlying unstructured nature of the simulation mesh, with all of the associated complexity that this brings to the data structures. The prototype CFD software framework has recently been used as the core processing module in a commercial Fire Field Modelling product (called "SMARTFIRE" [EWER99-1]). This CFD framework is also being used by researchers to investigate many diverse aspects of CFD technology, including Knowledge Based Solution Control, Gaseous and Solid Phase Combustion, Adaptive Meshing and CAD file interpretation for ease of case specification.
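As a rough illustration of the group-solver idea, the sketch below restricts SOR sweeps to a flagged subset of cells on a simple structured grid; the geometry, the static mask and the relaxation factor are placeholders chosen for illustration, whereas the solvers discussed above operate on unstructured meshes with static or dynamic group membership.

```python
import numpy as np

def sor_group_sweep(T, group_mask, omega=1.5, sweeps=50):
    # Successive Over-Relaxation on a 2D Laplace problem, updating only the
    # cells flagged in group_mask so that effort is concentrated on the region
    # that still requires intensive computation.
    for _ in range(sweeps):
        for i in range(1, T.shape[0] - 1):
            for j in range(1, T.shape[1] - 1):
                if not group_mask[i, j]:
                    continue
                gauss_seidel = 0.25 * (T[i - 1, j] + T[i + 1, j] + T[i, j - 1] + T[i, j + 1])
                T[i, j] += omega * (gauss_seidel - T[i, j])   # over-relaxed update
    return T

T = np.zeros((20, 20))
T[0, :] = 100.0                                   # hot boundary
mask = np.zeros_like(T, dtype=bool)
mask[1:10, 1:-1] = True                           # solve only the upper "group" of cells
print(sor_group_sweep(T, mask)[5, 10])
```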
4

Investigation into the interaction of people with signage systems and its implementation within evacuation models

Xie, Hui January 2011 (has links)
Signage systems are widely used in buildings in accordance with safety legislation and building standards. They aim to provide general information and safety messages to occupants, and to assist them in wayfinding during both circulation and evacuation. Despite the fact that signage systems are an important component of building wayfinding systems, there is a lack of relevant data concerning how occupants perceive, interpret and use the information conveyed by emergency signage. The effectiveness of signage systems is therefore difficult to assess and is not correctly represented in any existing evacuation model. In this dissertation, this issue is addressed through two experiments and the modelling of the interaction with emergency signage based on the empirical findings. The first experiment involved measuring the maximum viewing distance of standard signs at various angles to produce an empirical representation of the signage catchment area. The second experiment involved measuring the impact of a signage system on a population of 68 test subjects who were instructed to individually vacate a building by their own efforts. The evacuation path involved a number of decision points at which emergency signage was available to identify the appropriate path. Through analysis of data derived from questionnaires and video footage, the number of people who perceived and utilised the signage information to assist their egress is determined. The experimental results are utilised to enhance the capability of the buildingEXODUS software. Firstly, the signage catchment area is revised to represent the visibility limits of signage more accurately than the previous model, which was based on the regulatory definition of signage visibility. Secondly, the impact of smoke on signage visibility is introduced, and the representation of the impact of smoke on occupant evacuation performance is improved based on existing published data. Finally, the signage detection and compliance probabilities are assigned values based on the experimental data rather than the ideal values previously assumed. The impact that the enhanced signage model has on evacuation analysis is demonstrated in hypothetical evacuation scenarios. The new signage model is shown to produce a more representative and realistic estimate of expected egress times than previously possible. It is hoped that this dissertation will improve our understanding of a key phenomenon, the interaction of people with signage, and allow interested parties (e.g. engineers, safety managers and designers) to examine more effectively and credibly the impact of signage systems upon pedestrian and evacuee movement.
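The catchment-area idea can be pictured with a simple geometric test such as the one below: a sign is taken to be visible when the observer lies within a maximum viewing distance that shrinks as the viewing angle moves off the sign's normal. The cosine fall-off and the 15 m figure are assumptions for illustration only, not the empirical curve measured in the experiment.

```python
import math

def within_catchment(observer, sign_pos, sign_normal, d_max=15.0):
    # Returns True if the observer falls inside the sign's assumed catchment area.
    dx, dy = observer[0] - sign_pos[0], observer[1] - sign_pos[1]
    dist = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) - sign_normal               # viewing angle off the sign's normal
    angle = abs((angle + math.pi) % (2 * math.pi) - math.pi)
    if angle >= math.pi / 2:                               # behind the sign: never visible
        return False
    return dist <= d_max * math.cos(angle)                 # assumed angular fall-off of visibility

print(within_catchment(observer=(5.0, 2.0), sign_pos=(0.0, 0.0), sign_normal=0.0))
```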
5

Domain partitioning and software modifications towards the parallelisation of the buildingEXODUS evacuation software

Mohedeen, Bibi Yasmina Yashanaz January 2011 (has links)
This thesis presents a parallel approach to evacuation modelling in order to aid real-time, large-scale procedure development. An extensive investigation into which partitioning strategy to employ with the parallel version of the software was conducted so as to maximise its performance. The use of evacuation modelling is well established as part of building design to ensure buildings meet performance-based safety and comfort criteria (such as the placement of windows or stairs so as to ease people's comfort). A novel use of evacuation modelling is during live evacuations from various disasters. Disasters may develop quickly over large areas, and incident commanders can use the model to plan safe escape routes that avoid danger areas. For this type of usage, very fast results must be obtainable in order for the incident commanders to optimise the evacuation plan, along with the capability of the software to simulate large-scale evacuation scenarios. buildingEXODUS provides very fast results for small-scale cases but struggles to give quick results for large-scale simulations. In addition, the loading of large-scale cases is dependent on the specification of the processor used, making the problem case unscalable. A solution to address these shortcomings is the use of parallel computing. Large-scale cases can be partitioned and run by a network of processors, reducing the running time of the simulations as well as allowing a large geometry to be represented by loading part of the domain on each processor. This scheme was attempted, and buildingEXODUS was successfully parallelised to cope with large-scale evacuation simulations. Various partitioning methods were attempted and, owing to the stochastic nature of every evacuation scenario, no definitive partitioning strategy could be found. The efficiency values ranged from 230% (with both cores being used on 10 dual-core processors) when an idealised case was run, down to 23% for another test case. The results obtained were highly dependent on the geometry of the test case, the scenario being applied, whether all the cores were being used in the case of multi-core processors, and the partitioning method used. However, the use of any partitioning method produced an improvement over running the case in serial. On the other hand, the speedups obtained were not scalable enough to warrant the adoption of any particular partitioning method. The dominant factor inhibiting the parallel system was processor idleness or overload rather than communication costs, which degraded the performance of the parallel system. Hence an intelligent partitioning strategy was devised, which dynamically assesses the current situation of the parallel system and repartitions the problem accordingly to prevent processor idleness and overloading. A dynamic load reallocation method was implemented within the parallelised buildingEXODUS to cater for any degradation of the parallel system. At its best, the dynamic reallocation strategy produced an efficiency value of 93.55%, and a value of 36.81% at its worst. As a direct comparison with the static partitioning strategy, an improvement was observed in most cases run. A maximum improvement of 96.48% was achieved by using the dynamic reallocation strategy compared to a static partitioning approach. Hence the parallelisation of the buildingEXODUS evacuation software was successfully implemented, with most cases achieving encouraging speedup values when a dynamic repartitioning strategy was employed.
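For context, the quoted efficiency figures are conventionally derived from speedup over the serial run, and a load-reallocation decision can be as simple as a threshold on population imbalance; the timings, agent counts and tolerance below are invented for illustration and are not figures from the thesis.

```python
def efficiency(t_serial, t_parallel, n_procs):
    # Parallel efficiency as a percentage: speedup divided by processor count.
    speedup = t_serial / t_parallel
    return 100.0 * speedup / n_procs

def needs_repartition(agents_per_proc, imbalance_tol=0.25):
    # Flag repartitioning when any processor is idle or overloaded relative to
    # the mean population it is simulating.
    mean = sum(agents_per_proc) / len(agents_per_proc)
    return any(abs(a - mean) > imbalance_tol * mean for a in agents_per_proc)

print(efficiency(t_serial=600.0, t_parallel=64.1, n_procs=10))   # about 93.6%
print(needs_repartition([1200, 300, 950, 1100]))                  # True: rebalance the domain
```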
6

Implementing a hybrid spatial discretisation within an agent based evacuation model

Chooramun, Nitish January 2011 (has links)
Within all evacuation and pedestrian dynamics models, the physical space in which the agents move and interact is represented in some way. Models typically use one of three basic approaches to represent space, namely a continuous representation of space, a fine network of nodes, or a coarse network of nodes. Each approach has its benefits and limitations: the continuous approach allows for an accurate representation of the building space and of the movement and interaction of individual agents, but suffers from relatively poor computational performance; the coarse nodal approach allows for very rapid computation but suffers from an inability to accurately represent the physical interaction of individual agents with each other and with the structure. The fine nodal approach represents a compromise between the two extremes, providing an ability to represent the interaction of agents while offering good computational performance. This dissertation is an attempt to develop a technology which encompasses the benefits of the three spatial representation methods and maximises computational efficiency while providing an optimal environment to represent the movement and interaction of agents. This was achieved through a number of phases. The initial part of the research focused on the investigation of the spatial representation techniques employed in current evacuation models and their respective capabilities. This was followed by a comprehensive review of the current state of knowledge regarding circulation and egress data. The outcome of the analytical phases provided a foundation for identifying the failings in current evacuation models and the approaches which would be conducive to advancing the current state of evacuation modelling. These concepts led to the generation of a blueprint comprising algorithmic procedures, which were used as input to the implementation phase. The buildingEXODUS evacuation model was used as a computational shell for the deployment of the new procedures. This shell features a sophisticated plug-in architecture which provided the appropriate platform for the incremental implementation, validation and integration of the newly developed models. The Continuous Model developed during the implementation phase comprises advanced algorithms which provide a more detailed and thorough representation of human behaviour and movement. Moreover, this research has resulted in the development of a novel approach, called Hybrid Spatial Discretisation (HSD), which provides the flexibility of using a combination of fine node networks, coarse node networks and continuous regions for spatial representation in evacuation models. Furthermore, the validation phase has demonstrated the suitability and scalability of the HSD approach for modelling the evacuation of large geometries while maximising computational efficiency.
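A toy dispatch routine such as the one below conveys how a hybrid discretisation can mix the three representations within one geometry, advancing each agent with the scheme of the region it currently occupies; the region types are taken from the description above, but the update rules and data fields are placeholders, not buildingEXODUS code.

```python
from enum import Enum, auto

class RegionType(Enum):
    COARSE = auto()      # whole rooms or corridors as single nodes: fastest, least detail
    FINE = auto()        # small nodes (roughly 0.5 m): per-agent movement at moderate cost
    CONTINUOUS = auto()  # free movement: most detail, highest computational cost

def advance(agent, dt=0.1):
    # Advance one agent by one time step using the scheme of its current region.
    kind = agent["region"]
    if kind is RegionType.COARSE:
        agent["travel_time_left"] -= dt                    # aggregate transit through the node
    elif kind is RegionType.FINE:
        agent["node"] += 1                                 # hop to the next node on its path
    else:
        x, y = agent["pos"]
        vx, vy = agent["vel"]
        agent["pos"] = (x + vx * dt, y + vy * dt)          # continuous kinematic step
    return agent

print(advance({"region": RegionType.CONTINUOUS, "pos": (0.0, 0.0), "vel": (1.2, 0.0)}))
```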
7

Penalized regression methods with application to generalized linear models, generalized additive models, and smoothing

Utami Zuliana, Sri January 2017 (has links)
Recently, penalized regression has been used to deal with problems encountered in maximum likelihood estimation, such as correlated parameters and a large number of predictors. The main issue in this type of regression is how to select the optimal model. In this thesis, Schall's algorithm is proposed as an automatic method for selecting the penalty weight. The algorithm has two steps. First, the coefficient estimates are obtained with an arbitrary penalty weight. Second, an estimate of the penalty weight λ is calculated as the ratio of the variance of the error to the variance of the coefficients. The iteration continues from step one until the estimate of the penalty weight converges. The computational cost is minimized because the optimal penalty weight can be obtained within a small number of iterations. In this thesis, Schall's algorithm is investigated for ridge regression, lasso regression and two-dimensional histogram smoothing. The proposed algorithms are applied to real data sets and simulated data sets. In addition, a new algorithm for lasso regression is proposed. The performance of the algorithm was broadly comparable across all applications. Schall's algorithm can be an efficient algorithm for selecting the penalty weight.
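A minimal sketch of the two-step iteration described above, applied to ridge regression: refit with the current penalty weight, then update λ as the ratio of the error variance to the coefficient variance until it converges. The crude degrees-of-freedom handling and the simulated data are simplifying assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def schall_ridge(X, y, lam=1.0, tol=1e-6, max_iter=100):
    n, p = X.shape
    for _ in range(max_iter):
        beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)   # step 1: penalised fit
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
        edf = np.trace(H)                                            # effective degrees of freedom
        sigma2_e = np.sum((y - X @ beta) ** 2) / (n - edf)           # error variance
        sigma2_b = np.sum(beta ** 2) / edf                           # coefficient variance
        lam_new = sigma2_e / sigma2_b                                # step 2: update the penalty weight
        if abs(lam_new - lam) < tol * lam:                           # stop once lambda converges
            return beta, lam_new
        lam = lam_new
    return beta, lam

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, 2.0, 0.0, -1.0]) + rng.normal(size=100)
beta, lam = schall_ridge(X, y)
print(round(lam, 3), np.round(beta, 2))
```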
8

Speech recognition by computer : algorithms and architectures

Tyler, J. E. M. January 1988 (has links)
This work is concerned with the investigation of algorithms and architectures for computer recognition of human speech. Three speech recognition algorithms have been implemented, using (a) Walsh Analysis, (b) Fourier Analysis and (c) Linear Predictive Coding. The Fourier Analysis algorithm made use of the Prime-number Fourier Transform technique. The Linear Predictive Coding algorithm made use of LeRoux and Gueguen's method for calculating the coefficients. The system was organised so that the speech samples could be input to a PC/XT microcomputer in a typical office environment. The PC/XT was linked via Ethernet to a Sun 2/180s computer system which allowed the data to be stored on a Winchester disk, so that the data used for testing each algorithm was identical. The recognition algorithms were implemented entirely in Pascal, to allow evaluation to take place on several different machines. The effectiveness of the algorithms was tested with a group of five naive speakers, results being in the form of recognition scores. The results showed the superiority of the Linear Predictive Coding algorithm, which achieved a mean recognition score of 93.3%. The software was implemented on three different computer systems. These were an 8-bit microprocessor, a 16-bit microcomputer based on the IBM PC/XT, and a Motorola 68020 based Sun Workstation. The effectiveness of the implementations was measured in terms of speed of execution of the recognition software. By limiting the vocabulary to ten words, it has been shown that it would be possible to achieve recognition of isolated utterances in real time using a single 68020 microprocessor. The definition of real time in this context is understood to mean that the recognition task will, on average, be completed within the duration of the utterance, for all the utterances in the recogniser's vocabulary. A speech recogniser architecture is proposed which would achieve real time speech recognition without any limitation being placed upon (a) the order of the transform, and (b) the size of the recogniser's vocabulary. This is achieved by utilising a pipeline of four processors, with the pattern matching process performed in parallel on groups of words in the vocabulary.
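The LPC front end can be pictured with the standard autocorrelation-plus-recursion computation sketched below; it uses the Levinson-Durbin recursion, a close relative of the LeRoux and Gueguen scheme named above (which works on reflection coefficients to remain fixed-point friendly), and the synthetic frame is only a stand-in for real speech samples.

```python
import numpy as np

def autocorr(frame, order):
    return np.array([np.dot(frame[: len(frame) - k], frame[k:]) for k in range(order + 1)])

def lpc(frame, order=10):
    # Levinson-Durbin recursion: solve the Toeplitz normal equations for the
    # linear-prediction coefficients, one order at a time.
    r = autocorr(frame, order)
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]                 # update previous coefficients
        a[i] = k
        err *= (1.0 - k * k)                                # remaining prediction error
    return a, err

frame = np.sin(0.3 * np.arange(240)) + 0.01 * np.random.default_rng(1).normal(size=240)
coeffs, residual_energy = lpc(frame, order=8)
print(np.round(coeffs, 3), round(residual_energy, 4))
```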
9

The use of some non-minimal representations to improve the effectiveness of genetic algorithms

Robbins, Phil January 1995 (has links)
In the unitation representation used in genetic algorithms, the number of genotypes that map onto each phenotype varies greatly. This leads to an attractor in phenotype space which impairs the performance of the genetic algorithm. The attractor is illustrated theoretically and empirically. A new representation, called the length varying representation (LVR), allows unitation chromosomes of varying length (and hence with a variety of attractors) to coexist. Chromosomes whose lengths yield attractors close to optima come to dominate the population. The LVR is shown to be more effective than the unitation representation against a variety of fitness functions. However, the LVR preferentially converges towards the low end of phenotype space. The phenotype shift representation (PSR), which retains the ability of the LVR to select for attractors that are close to optima, whilst using a fixed-length chromosome and thus avoiding the asymmetries inherent in the LVR, is defined. The PSR is more effective than the LVR, and the results compare favourably with previously published results from eight other algorithms. The internal operation of the PSR is investigated. The PSR is extended to cover multi-dimensional problems. The premise that improvements in performance may be attained by the insertion of introns, non-coding sequences affecting linkage, into traditional bit string chromosomes is investigated. In this investigation, using a population size of 50, there was no evidence of improvement in performance. However, the position of the optima relative to the Hamming cliffs is shown to have a major effect on the performance of the genetic algorithm using the binary representation, and the inadequacy of the traditional crossover and mutation operators in this context is demonstrated. Also, the disallowance of duplicate population members was found to improve performance over the standard generational replacement strategy in all trials.
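The attractor can be seen directly from the counting argument: under unitation the phenotype is just the number of ones in the chromosome, so the number of genotypes mapping onto phenotype p is the binomial coefficient C(L, p), which peaks at L/2. The short sketch below, with an illustrative chromosome length of 20, shows the pile-up; it is a demonstration of the representation, not code from the thesis.

```python
from math import comb
from random import choices

L = 20  # illustrative chromosome length

def phenotype(chromosome):
    # Unitation: the phenotype is the number of ones in the bit string.
    return sum(chromosome)

genotypes_per_phenotype = {p: comb(L, p) for p in range(L + 1)}
print(genotypes_per_phenotype[0], genotypes_per_phenotype[L // 2])   # 1 genotype vs 184756

sample = [phenotype(choices([0, 1], k=L)) for _ in range(10_000)]
print(sum(sample) / len(sample))   # the empirical mean sits near L/2 = 10, the attractor
```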
10

On the evaluation of aggregated web search

Zhou, Ke January 2014 (has links)
Aggregating search results from a variety of heterogeneous sources, or so-called verticals, such as news, image and video, into a single interface is a popular paradigm in web search. This search paradigm is commonly referred to as aggregated search. The heterogeneity of the information, the richer user interaction, and the more complex presentation strategy make the evaluation of the aggregated search paradigm quite challenging. The Cranfield paradigm, the use of test collections and evaluation measures to assess the effectiveness of information retrieval (IR) systems, is the de facto standard evaluation strategy in the IR research community, and it has its origins in work dating to the early 1960s. This thesis focuses on applying this evaluation paradigm to the context of aggregated web search, contributing to the long-term goal of a complete, reproducible and reliable evaluation methodology for aggregated search in the research community. The Cranfield paradigm for aggregated search consists of building a test collection and developing a set of evaluation metrics. In the context of aggregated search, a test collection should contain results from a set of verticals, some information needs relating to this task and a set of relevance assessments. The metrics proposed should utilize the information in the test collection in order to measure the performance of any aggregated search page. The more complex user behavior of aggregated search should be reflected in the test collection through assessments and modeled in the metrics. Therefore, firstly, we aim to better understand the factors involved in determining relevance for aggregated search and subsequently build a reliable and reusable test collection for this task. By conducting several user studies to assess vertical relevance and creating a test collection by reusing existing test collections, we create a testbed with both vertical-level (user orientation) and document-level relevance assessments. In addition, we analyze the relationship between both types of assessments and find that they are correlated in terms of measuring the system performance for the user. Secondly, by utilizing the created test collection, we aim to investigate how to model the aggregated search user in a principled way in order to propose reliable, intuitive and trustworthy evaluation metrics to measure the user experience. We start our investigations by studying the evaluation of one key component of aggregated search in isolation: vertical selection, i.e. selecting the relevant verticals. Then we propose a general utility-effort framework to evaluate the final aggregated search pages. We demonstrate the fidelity (predictive power) of the proposed metrics by correlating them with user preferences for aggregated search pages. Furthermore, we meta-evaluate the reliability and intuitiveness of a variety of metrics and show that our proposed aggregated search metrics are the most reliable and intuitive, compared with adapted diversity-based and traditional IR metrics. To summarize, in this thesis we mainly demonstrate the feasibility of applying the Cranfield paradigm to aggregated search for reproducible, cheap, reliable and trustworthy evaluation.
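As a generic illustration of scoring the vertical-selection component against the relevance assessments held in a test collection, the sketch below computes simple set-based precision, recall and F1 over the selected verticals; these measures and the example judgements are illustrative only and are not the specific metrics proposed in the thesis.

```python
def vertical_selection_scores(selected, relevant):
    # Compare the verticals a system selected with those judged relevant.
    selected, relevant = set(selected), set(relevant)
    tp = len(selected & relevant)
    precision = tp / len(selected) if selected else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical query: the system blends image and news results, while assessors
# judged the image and video verticals relevant for this information need.
print(vertical_selection_scores(selected=["image", "news"], relevant=["image", "video"]))
```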
