  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Domain partitioning and software modifications towards the parallelisation of the buildingEXODUS evacuation software

Mohedeen, Bibi Yasmina Yashanaz January 2011 (has links)
This thesis presents a parallel approach to evacuation modelling in order to aid real-time, large-scale procedure development. An extensive investigation was conducted into which partitioning strategy to employ with the parallel version of the software, so as to maximise its performance. The use of evacuation modelling is well established as part of building design, to ensure buildings meet performance-based safety and comfort criteria (such as the placement of windows or stairs to ensure people's comfort). A novel application of evacuation modelling is during live evacuations from various disasters. Disasters may develop quickly over large areas, and incident commanders can use the model to plan safe escape routes that avoid danger areas. For this type of usage, very fast results must be obtainable in order for the incident commanders to optimise the evacuation plan, and the software must be capable of simulating large-scale evacuation scenarios. buildingEXODUS provides very fast results for small-scale cases but struggles to give quick results for large-scale simulations. In addition, the loading of large-scale cases is dependent on the specification of the processor used, making the problem case unscalable. A solution to these shortcomings is parallel computing. Large-scale cases can be partitioned and run by a network of processors, reducing the running time of the simulations and allowing a large geometry to be represented by loading part of the domain on each processor. This scheme was attempted, and buildingEXODUS was successfully parallelised to cope with large-scale evacuation simulations. Various partitioning methods were attempted and, due to the stochastic nature of every evacuation scenario, no single best partitioning strategy could be found. The efficiency values ranged from 230% (with both cores being used on 10 dual-core processors) for an idealised case down to 23% for another test case.
The results obtained were highly dependent on the test case's geometry, the scenario applied, whether all cores were used in the case of multi-core processors, and the partitioning method used. However, the use of any partitioning method produced an improvement over running the case in serial. On the other hand, the speedups obtained were not sufficiently scalable to warrant the adoption of any particular partitioning method. The dominant factor inhibiting the parallel system was processor idleness or overload rather than communication costs, which degraded its performance. Hence an intelligent partitioning strategy was devised, which dynamically assesses the current state of the parallel system and repartitions the problem accordingly to prevent processor idleness and overloading. A dynamic load reallocation method was implemented within the parallelised buildingEXODUS to counter any degradation of the parallel system. At its best, the dynamic reallocation strategy produced an efficiency value of 93.55%, and at its worst a value of 36.81%. In direct comparison with the static partitioning strategy, an improvement was observed in most cases run. A maximum improvement of 96.48% was achieved using the dynamic reallocation strategy compared to a static partitioning approach. Hence the parallelisation of the buildingEXODUS evacuation software was successfully implemented, with most cases achieving encouraging speedup values when a dynamic repartitioning strategy was employed.
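The efficiency and improvement figures quoted in this abstract follow the standard parallel-performance definitions. As an illustrative sketch (not code from the thesis; the function names and the imbalance threshold are assumptions), the efficiency metric and an idleness/overload check of the kind a dynamic repartitioner might use could look like this:

```python
# Illustrative sketch only: how efficiency figures such as the 230% and 23%
# quoted above are typically derived, plus a toy load-imbalance check.
# The threshold value and all names are assumptions, not thesis internals.

def efficiency(serial_time, parallel_time, n_processors):
    """Parallel efficiency = speedup / processor count, as a percentage."""
    speedup = serial_time / parallel_time
    return 100.0 * speedup / n_processors

def needs_repartition(busy_times, imbalance_threshold=0.25):
    """Flag repartitioning when some processors sit idle while others are
    overloaded, i.e. the relative load imbalance exceeds a threshold."""
    avg = sum(busy_times) / len(busy_times)
    imbalance = (max(busy_times) - avg) / avg
    return imbalance > imbalance_threshold

# A super-linear result (efficiency > 100%) is possible when partitioned
# sub-domains fit in cache, as with the idealised 230% case mentioned above.
print(efficiency(2000.0, 50.0, 20))             # 40x speedup on 20 cores -> 200.0
print(needs_repartition([9.0, 1.0, 2.0, 2.0]))  # one overloaded core -> True
```

Under these definitions, a static partition that leaves processors idle drives efficiency down, which is exactly the degradation the dynamic reallocation strategy targets.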
162

Implementing a hybrid spatial discretisation within an agent based evacuation model

Chooramun, Nitish January 2011 (has links)
Within all evacuation and pedestrian dynamics models, the physical space in which the agents move and interact is represented in some way. Models typically use one of three basic approaches to represent space, namely a continuous representation of space, a fine network of nodes or a coarse network of nodes. Each approach has its benefits and limitations: the continuous approach allows for an accurate representation of the building space and the movement and interaction of individual agents, but suffers from relatively poor computational performance; the coarse nodal approach allows for very rapid computation, but suffers from an inability to accurately represent the physical interaction of individual agents with each other and with the structure. The fine nodal approach represents a compromise between the two extremes, providing an ability to represent the interaction of agents while offering good computational performance. This dissertation is an attempt to develop a technology which encompasses the benefits of the three spatial representation methods, maximising computational efficiency while providing an optimal environment to represent the movement and interaction of agents. This was achieved through a number of phases. The initial part of the research focused on the investigation of the spatial representation techniques employed in current evacuation models and their respective capabilities. This was followed by a comprehensive review of the current state of knowledge regarding circulation and egress data. The outcome of the analytical phases provided a foundation for eliciting the failings of current evacuation models and identifying approaches conducive to advancing the current state of evacuation modelling. These concepts led to the generation of a blueprint comprising algorithmic procedures, which were used as input to the implementation phase.
The buildingEXODUS evacuation model was used as a computational shell for the deployment of the new procedures. This shell features a sophisticated plug-in architecture which provided the appropriate platform for the incremental implementation, validation and integration of the newly developed models. The Continuous Model developed during the implementation phase comprises advanced algorithms which provide a more detailed and thorough representation of human behaviour and movement. Moreover, this research has resulted in the development of a novel approach, called Hybrid Spatial Discretisation (HSD), which provides the flexibility of using a combination of fine node networks, coarse node networks and continuous regions for spatial representation in evacuation models. Furthermore, the validation phase has demonstrated the suitability and scalability of the HSD approach for modelling the evacuation of large geometries while maximising computational efficiency.
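The HSD approach is described only at a high level in this abstract. A minimal, hypothetical sketch of what a hybrid spatial representation might look like — with every class and field name invented for illustration, and not taken from buildingEXODUS — is:

```python
# Hypothetical sketch of a hybrid spatial discretisation: each region of a
# geometry declares which of the three representations it uses, and an
# agent's position is held at the resolution its current region supports.
# Names, modes and cell sizes are all illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    mode: str         # 'continuous' | 'fine' | 'coarse'
    cell_size: float  # metres; ignored for 'continuous' regions

def locate(region, x, y):
    """Snap a continuous coordinate to the resolution of its region."""
    if region.mode == 'continuous':
        return (x, y)                                # exact position kept
    q = region.cell_size
    return (round(x / q) * q, round(y / q) * q)      # nearest node centre

corridor = Region('corridor', 'fine', 0.5)    # 0.5 m fine node lattice
atrium   = Region('atrium', 'continuous', 0.0)
wing     = Region('west wing', 'coarse', 5.0) # one node per 5 m zone

print(locate(corridor, 1.32, 2.71))  # snapped to the fine lattice
print(locate(atrium, 1.32, 2.71))    # kept exact in the continuous region
```

The design point this illustrates is that the costly exact representation is paid for only where agent interactions need it, which is how a hybrid scheme can scale to large geometries.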
163

High-performance computing for computational biology of the heart

McFarlane, Ross January 2010 (has links)
This thesis describes the development of Beatbox — a simulation environment for computational biology of the heart. Beatbox aims to provide an adaptable, approachable simulation tool and an extensible framework with which High Performance Computing may be harnessed by researchers. Beatbox is built upon the QUI software package, which is studied in Chapter 2. The chapter discusses QUI’s functionality and common patterns of use, and describes its underlying software architecture, in particular its extensibility through the addition of new software modules called ‘devices’. The chapter summarises good practice for device developers in the Laws of Devices. Chapter 3 discusses the parallel architecture of Beatbox and its implementation for distributed memory clusters. The chapter discusses strategies for domain decomposition and halo swapping, and introduces an efficient method for exchange of data with diagonal neighbours called Magic Corners. The development of Beatbox’s parallel Input/Output facilities is detailed, and its impact on scaling performance discussed. The chapter discusses the way in which parallelism can be hidden from the user, even while permitting the runtime execution of user-defined functions. The chapter goes on to show how QUI’s extensibility can be continued in a parallel environment by providing implicit parallelism for devices and defining Laws of Parallel Devices to guide third-party developers. Beatbox’s parallel performance is evaluated and discussed. Chapter 4 describes the extension of Beatbox to simulate anatomically realistic tissue geometry. Representation of irregular geometries is described, along with associated user controls. A technique to compute no-flux boundary conditions on irregular boundaries is introduced. The Laws of Devices are further developed to include irregular geometries. Finally, parallel performance on anatomically realistic meshes is evaluated.
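The Magic Corners technique itself is not detailed in this abstract. A common way to serve diagonal neighbours in a halo swap without extra messages is to perform the face exchanges in two ordered phases, letting corner values ride along in the second phase; the serial sketch below illustrates that general idea on a 2x2 grid of NumPy subdomains. It is an assumption that Magic Corners works along these lines, and all names here are illustrative, not Beatbox code:

```python
import numpy as np

# Build a 2x2 "processor" grid of subdomains, each with a one-cell halo.
n = 4  # interior size per subdomain
global_field = np.arange(8 * 8, dtype=float).reshape(8, 8)
subs = {}
for pi in range(2):
    for pj in range(2):
        s = np.full((n + 2, n + 2), np.nan)
        s[1:-1, 1:-1] = global_field[pi*n:(pi+1)*n, pj*n:(pj+1)*n]
        subs[(pi, pj)] = s

def swap_rows(subs):
    # Phase 1: exchange top/bottom halo rows (interior columns only).
    for (pi, pj), s in subs.items():
        if (pi - 1, pj) in subs:
            s[0, 1:-1] = subs[(pi - 1, pj)][-2, 1:-1]
        if (pi + 1, pj) in subs:
            s[-1, 1:-1] = subs[(pi + 1, pj)][1, 1:-1]

def swap_cols(subs):
    # Phase 2: exchange left/right halo columns *including* the halo rows
    # received in phase 1 -- this delivers corner values to diagonal
    # neighbours without any explicit diagonal communication.
    for (pi, pj), s in subs.items():
        if (pi, pj - 1) in subs:
            s[:, 0] = subs[(pi, pj - 1)][:, -2]
        if (pi, pj + 1) in subs:
            s[:, -1] = subs[(pi, pj + 1)][:, 1]

swap_rows(subs)
swap_cols(subs)
# Subdomain (0,0)'s bottom-right halo corner now holds the value owned by
# its diagonal neighbour (1,1):
print(subs[(0, 0)][-1, -1], global_field[n, n])
```

In an MPI setting the same ordering trick halves the number of neighbour exchanges from eight (faces plus corners) to four face exchanges per subdomain.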
164

An algorithm for computing short-range forces in molecular dynamics simulations with non-uniform particle densities

Law, Timothy R. January 2017 (has links)
We develop the projection sorting algorithm, used to compute pairwise short-range interaction forces between particles in molecular dynamics simulations. We contrast this algorithm with the state of the art and discuss situations where it may be particularly effective. We then explore the efficient implementation of the projection sorting algorithm in both on-node (shared memory parallel) and off-node (distributed memory parallel) environments. We provide AVX, AVX2, KNC and AVX-512 intrinsic implementations of the force calculation kernel. We use the modern multi- and many-core architectures Intel Haswell, Broadwell, Knights Corner (KNC) and Knights Landing (KNL) as a representative slice of modern High Performance Computing (HPC) installations. In the course of implementation we use our algorithm as a means of optimising a contemporary biophysical molecular dynamics simulation of chromosome condensation. We compare state-of-the-art Molecular Dynamics (MD) algorithms with projection sorting, and experimentally demonstrate the performance gains possible with our algorithm. These experiments are carried out in single- and multi-node configurations. We observe speedups of up to 5x when comparing our algorithm to the state of the art, and up to 10x when compared to the original unoptimised simulation. These optimisations have directly affected the ability of domain scientists to carry out their work.
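The abstract describes projection sorting only at a high level. The core idea — project particle positions onto an axis, sort along it, and prune candidate pairs whose projections already differ by more than the cutoff (the projection gap is a lower bound on the true distance) — can be sketched as follows. This is a simplified serial illustration, not the thesis's vectorised implementation:

```python
import numpy as np

def projection_sort_pairs(positions, cutoff, axis=np.array([1.0, 0.0, 0.0])):
    """Find all pairs closer than `cutoff` by sorting particles along a
    projection axis and pruning pairs whose projections already differ
    by more than the cutoff (a safe lower bound on the 3-D distance)."""
    proj = positions @ axis
    order = np.argsort(proj)
    pairs = set()
    for a in range(len(order)):
        i = order[a]
        for b in range(a + 1, len(order)):
            j = order[b]
            if proj[j] - proj[i] > cutoff:
                break  # everything further along the axis is too far
            if np.linalg.norm(positions[i] - positions[j]) <= cutoff:
                pairs.add(tuple(sorted((i, j))))
    return pairs

rng = np.random.default_rng(0)
pts = rng.random((200, 3)) * 10.0
cutoff = 1.0
fast = projection_sort_pairs(pts, cutoff)
brute = {(i, j) for i in range(len(pts)) for j in range(i + 1, len(pts))
         if np.linalg.norm(pts[i] - pts[j]) <= cutoff}
print(fast == brute)  # the pruned search finds exactly the brute-force pairs
```

The pruning is exact because projecting can only shrink distances; the method pays off most when particle density is non-uniform, as the thesis title suggests, since cell-list approaches waste work on empty cells.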
165

A visual adaptive authoring framework for adaptive hypermedia

Khan, Javed Arif January 2018 (has links)
In a linear hypermedia system, all users are offered a standard series of hyperlinks. Adaptive Hypermedia (AH) tailors what the user sees to the user's goals, abilities, interests, knowledge and preferences. Adaptive Hypermedia is said to be the answer to the 'lost in hyperspace' phenomenon, in which the user has too many hyperlinks to choose from and too little knowledge to select the most appropriate one. AH offers a selection of links and content that is most appropriate to the current user. In an Adaptive Educational Hypermedia (AEH) course, a student's learning experience can be personalised using a User Model (UM), which can include information such as the student's knowledge level, preferences and culture. Besides these basic components, a Goal Model (GM) can represent the goals the users should meet, and a Domain Model (DM) represents the knowledge domain. Adaptive strategies are sets of adaptive rules that can be applied to these models to allow the personalisation of the course for students according to their needs. Given the many interacting elements, it is clear that the authoring process is a bottleneck in adaptive course creation, and it needs to be improved in terms of interoperability, usability and reuse of the adaptive behaviour (strategies). Authoring of Adaptive Hypermedia is considered difficult and time consuming. There is great scope for improving authoring tools in Adaptive Educational Hypermedia systems, to aid already burdened authors in creating adaptive courses easily. Adaptation specifications are very useful in creating adaptive behaviours to support the needs of a group of learners, but authors often lack the time or the skills needed to create new adaptation specifications from scratch. Creating an adaptation specification requires the author to know and remember the programming language syntax, which places a knowledge barrier before the author.
LAG is a complete and useful programming language which, however, is considered too complex for authors to deal with directly. This thesis therefore proposes a visual framework (LAGBlocks) for the LAG adaptation language and an authoring tool (VASE) that utilises the proposed visual framework to create adaptive specifications by manipulating visual elements. It is shown that the VASE authoring tool, together with the visual framework, enables authors to create adaptive specifications with ease and assists them in creating specifications that promote the "separation of concerns". The VASE authoring tool offers code completeness and correctness at design time, and also allows adaptive strategies to be used within other tools for adaptive hypermedia. The goal is thus to make adaptive specifications easier to create and to share, for authors with little or no programming knowledge and experience. This thesis looks at three aspects of authoring in adaptive educational hypermedia systems. The first aspect is concerned with the problems faced by the author of an adaptive hypermedia system; the second with the findings gathered from investigating previously developed authoring tools; and the final aspect with the proposal, implementation and evaluation of a new authoring tool that improves the authoring process for authors with different knowledge, backgrounds and experience. The purpose of the new tool, VASE, is to enable authors to create adaptive strategies in a puzzle-building manner; moreover, the created adaptive strategies can be used within (are compatible with) other adaptive hypermedia systems that use the LAG programming language.
166

Towards a model of giftedness in programming : an investigation of programming characteristics of gifted students at University of Warwick

Qahmash, Ayman January 2018 (has links)
This study investigates characteristics related to learning programming among gifted first-year computer science students. These characteristics include mental representations, knowledge representations, coding strategies, and attitudes and personality traits. The study was motivated by the goal of developing a theoretical framework to define giftedness in programming. In doing so, it aims to close the gap between gifted education and computer science education, allowing gifted programmers to be supported. Previous studies indicated a lack of theoretical foundation for gifted education in computer science, especially for identifying gifted programmers, which may have resulted in concerns about the identification process and/or inappropriate support. The study starts by investigating the relationship between mathematics and programming. We collected 3060 records of raw data on students' grades from 1996 to 2015. Descriptive statistics and the Pearson product-moment correlation test were used for the analysis. The results indicate a statistically significant positive correlation between mathematics and programming in general, and between specific mathematics and programming modules. The study then investigates other programming-related characteristics using a case study methodology, collecting quantitative and qualitative data. A sample of n=9 gifted students was selected and interviewed. In addition, we collected the students' grades, code-writing problems and project (Witter) source code, and analysed these data using analysis procedures specific to each method. The results indicate that gifted student programmers may possess a single characteristic or multiple characteristics with large overlaps. We introduced a model to define giftedness in programming that consists of three profiles: mathematical ability, creativity and personal traits, with each profile consisting of sub-characteristics.
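The Pearson product-moment correlation used in the grade analysis can be computed directly from its definition. The sketch below uses invented grade pairs, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient, the statistic used
    to relate mathematics and programming grades in studies like this."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical grade pairs (maths %, programming %) -- illustrative only.
maths = [55, 62, 70, 78, 85, 91]
programming = [50, 60, 68, 74, 88, 90]
r = pearson_r(maths, programming)
print(round(r, 3))  # close to +1: a strong positive correlation
```

A value of r near +1, combined with a significance test against the sample size, is what supports a claim of "statistically significant positive correlation" between the two subjects.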
167

High-dimensional-output surrogate models for uncertainty and sensitivity analyses

Triantafyllidis, Vasileios January 2018 (has links)
Computational models that describe complex physical phenomena tend to be computationally expensive and time consuming. Partial differential equation (PDE) based models in particular produce spatio-temporal data sets in high-dimensional output spaces. Repeated calls to such computer models for tasks such as sensitivity analysis, uncertainty quantification and design optimisation can become computationally infeasible as a result. While constructing an emulator is one way to approximate the outcome of expensive computer models, emulators are not always capable of dealing with high-dimensional data sets. To deal with high-dimensional data, in this thesis emulation strategies (Gaussian processes (GPs), artificial neural networks (ANNs) and support vector machines (SVMs)) are combined with linear and non-linear dimensionality reduction techniques (kPCA, Isomap and diffusion maps) to develop efficient emulators. For variance-based sensitivity analysis, a probabilistic framework is developed to account for the emulator uncertainty, and the method is extended to multivariate outputs, with a derivation of new semi-analytical results for performing rapid sensitivity analysis of univariate or multivariate outputs. The developed emulators are also used to extend reduced order models (ROMs) based on proper orthogonal decomposition to parameter-dependent PDEs, including an extension of the discrete empirical interpolation method to non-linear PDE systems.
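The emulation strategy described — compress high-dimensional outputs, emulate the reduced representation as a function of the inputs, then map predictions back — can be sketched with plain PCA and polynomial fits standing in for the thesis's kPCA/Isomap/diffusion-map reductions and GP/ANN/SVM emulators. Everything below (the toy "simulator", sizes, and fit choices) is an illustrative assumption:

```python
import numpy as np

# Sketch: emulate a 200-dimensional spatial output of an expensive model by
# (1) reducing the outputs with PCA, (2) fitting a cheap surrogate to each
# retained component, (3) reconstructing full-field predictions.

rng = np.random.default_rng(1)
X = rng.uniform(0.5, 1.0, size=(60, 1))   # scalar model input (60 runs)
grid = np.linspace(0, np.pi, 200)         # 200-point spatial output grid
Y = np.sin(grid[None, :] * (1.0 + X))     # stand-in for the expensive PDE model

mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
k = 5                                     # retained principal components
scores = (Y - mean) @ Vt[:k].T            # low-dimensional representation

# Emulate each PCA score with a cubic polynomial in the input (a GP or ANN
# would take this role in the thesis).
coeffs = [np.polyfit(X[:, 0], scores[:, j], 3) for j in range(k)]

def emulate(x_new):
    z = np.array([np.polyval(c, x_new) for c in coeffs])
    return mean + z @ Vt[:k]              # back to the 200-dim output space

x_test = 0.75
err = np.max(np.abs(emulate(x_test) - np.sin(grid * (1.0 + x_test))))
print(err)  # small combined reduction + emulation error
```

Once built, the emulator is cheap enough to be called thousands of times for variance-based sensitivity indices or uncertainty propagation, which is exactly where the full model would be infeasible.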
168

The use of a Kinect-based technology within the school environment to enhance sensory-motor skills of children with autism

Mademtzi, Marilena January 2016 (has links)
This research explored the effect of Pictogram Room, a Kinect-based technology, on the sensory-motor skills of children with autism in a school setting. It focused on the overall development of sensory-motor skills, how these skills developed in different environments, and which of the sensory-motor subdomains improved the most. Theoretically, the study drew upon gaming theory and embodied cognition. It was a mixed methods study, with quantitative data as the dominant form of data collection and qualitative data in a more supportive role. During the first year, the intervention was implemented with the intervention group (n=5), twice a week for 15 minutes, over the course of nine weeks. The following year, a wait-list control group was recruited (n=5). The findings from the researcher’s checklist, as well as those from the standardised assessments, showed that sensory-motor skills in the intervention group improved significantly, and these skills also generalised to other environments. Finally, drawing on the teachers’ interviews, social play and adaptive behaviours were also evaluated, with positive results for the intervention group.
169

Perceptions and practice of Gov2.0 in English local government

Barrance, Thomas Alexander January 2016 (has links)
Gov2.0 is an emerging and contested subject that offers a radical alternative to how relationships between residents and their local authorities are constructed. This research investigates the practice of Gov2.0, and practitioners’ perceptions of it, in English local authorities. The research combines analysis of practice through a content analysis of 50 principal local authority websites with the use of Q-methodology to identify the shared subjective frames of reference of 52 local government actors. The literature surrounding Gov2.0 is found to lack a clear theoretical model; a model is therefore presented as a basis for exploring the practice and common understanding of the subject. Levels of inconsistency in the adoption of Gov2.0 are identified that are not explained by political party control, geography or authority governance structure. The results of the Q-methodology examination of individual perspectives are discussed, and four frames of reference that provide a foundation for the variations in practice observed are proposed. This research offers a theoretical model for understanding Gov2.0; it identifies four distinct frames of reference held by practitioners regarding Gov2.0 and presents an analysis of the range of adoption practices within English local authorities.
170

Modelling metrical flux : an adaptive frequency neural network for expressive rhythmic perception and prediction

Elmsley, Andrew J. January 2017 (has links)
Beat induction is the perceptual and cognitive process by which humans listen to music and perceive a steady pulse. Computationally modelling beat induction is important for many Music Information Retrieval (MIR) methods and remains in general an open problem, especially when processing expressive timing, e.g. tempo changes or rubato. A neuro-cognitive model has been proposed, the Gradient Frequency Neural Network (GFNN), which can model the perception of pulse and metre. GFNNs have been applied successfully to a range of ‘difficult’ music perception problems such as polyrhythms and syncopation. This thesis explores the use of GFNNs for expressive rhythm perception and modelling, addressing the current gap in knowledge about how to deal with varying tempo and expressive timing in automated and interactive music systems. The canonical oscillators contained in a GFNN have entrainment properties, allowing phase shifts and resulting in changes to the observed frequencies. This makes them good candidates for solving the expressive timing problem. It is found that modelling metrical perception with GFNNs can improve a machine learning music model. However, it is also discovered that GFNNs perform poorly when dealing with tempo changes in the stimulus. Therefore, a novel Adaptive Frequency Neural Network (AFNN) is introduced, extending the GFNN with a Hebbian learning rule on the oscillator frequencies. Two new adaptive behaviours (attraction and elasticity) increase entrainment in the oscillators and increase the computational efficiency of the model by allowing a great reduction in the size of the network. The AFNN is evaluated through a series of experiments on sets of symbolic and audio rhythms, both from the literature and created specifically for this research. Where previous work with GFNNs has focused on frequency and amplitude responses, this thesis considers phase information critical for pulse perception.
Evaluating the time-based output, it was found that AFNNs behave differently to GFNNs: responses to symbolic stimuli with both steady and varying pulses are significantly improved, and on audio data the AFNN’s performance matches the GFNN’s, despite the AFNN’s lower density. The thesis argues that AFNNs could replace the linear filtering methods commonly used in beat tracking and tempo estimation systems, and lead to more accurate methods.
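The Hebbian frequency-learning idea can be illustrated with a single adaptive-frequency phase oscillator in the style of Righetti et al.'s adaptive oscillators: the oscillator's natural frequency drifts toward the frequency of the stimulus it entrains to. This is a stand-in sketch of the general mechanism, not the AFNN's actual learning rule or the GFNN's canonical oscillator dynamics:

```python
import numpy as np

# A phase oscillator driven by a periodic stimulus F(t). The same coupling
# term that entrains the phase also updates the natural frequency (a
# Hebbian-style rule), so the oscillator "learns" the stimulus frequency.
# Parameters (k, dt, durations) are illustrative assumptions.

def adapt_frequency(omega0, omega_stim, k=2.0, dt=0.001, t_end=100.0):
    phi, omega = 0.0, omega0
    t = 0.0
    while t < t_end:
        force = np.cos(omega_stim * t)           # periodic stimulus
        dphi = omega - k * force * np.sin(phi)   # entrained phase dynamics
        domega = -k * force * np.sin(phi)        # Hebbian frequency update
        phi += dt * dphi
        omega += dt * domega
        t += dt
    return omega

# Start at 4 rad/s; the oscillator locks onto a 5 rad/s stimulus.
learned = adapt_frequency(omega0=4.0, omega_stim=5.0)
print(learned)  # close to 5.0
```

A network of such oscillators needs far fewer units than a fixed-frequency bank spanning the same tempo range, which mirrors the abstract's point that adaptation allows a great reduction in network size.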
