  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Developing a multi-methodological approach to hospital operating theatre scheduling

Penn, Marion Louise January 2014 (has links)
Operating theatres and surgeons are among the most expensive resources in any hospital, so it is vital that they are used efficiently. Due to the complexity of the challenges involved in theatre scheduling, we split the problem into levels and address the tactical and day-to-day scheduling problems. Cognitive mapping is used to identify the important factors to consider in theatre scheduling and their interactions. This allows development and testing of our understanding with hospital staff, ensuring that the aspects of theatre scheduling they consider important are included in the quantitative modelling. At the tactical level, our model assists hospitals in creating new theatre timetables, which take account of reducing the maximum number of beds required, surgeons’ preferences, surgeons’ availability, variations in types of theatre and their suitability for different types of surgery, limited equipment availability and varying the length of the cycle over which the timetable is repeated. The weightings given to each of these factors can be varied, allowing exploration of possible timetables. At the day-to-day scheduling level we focus on the advanced booking of individual patients for surgery. Using simulation, a range of algorithms for booking patients is explored, with the algorithms derived from a mixture of scheduling literature and ideas from hospital staff. The most significant result is that more efficient schedules can be achieved by delaying scheduling until as close to the time of surgery as possible; however, this must be balanced with the need to give patients adequate warning to make arrangements to attend hospital for their surgery. The different stages of this project present different challenges and constraints, therefore requiring different methodologies. As a whole this thesis demonstrates that a range of methodologies can be applied to different stages of a problem to develop better solutions.
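The advantage of delaying scheduling until more cases are known can be illustrated with a toy bin-packing analogy (this is a sketch, not the thesis's actual algorithms): an online first-fit booker commits each case to a theatre-day at referral, while a delayed booker sees the whole list and packs longest cases first. All names and numbers below are illustrative.

```python
import random

def online_first_fit(durations, capacity):
    """Assign each case to the first theatre-day with room, as it arrives."""
    days = []
    for d in durations:
        for day in days:
            if sum(day) + d <= capacity:
                day.append(d)
                break
        else:
            days.append([d])
    return len(days)

def delayed_first_fit_decreasing(durations, capacity):
    """Wait until all cases are known, then pack longest-first."""
    days = []
    for d in sorted(durations, reverse=True):
        for day in days:
            if sum(day) + d <= capacity:
                day.append(d)
                break
        else:
            days.append([d])
    return len(days)

random.seed(1)
cases = [random.randint(30, 240) for _ in range(60)]  # case durations, minutes
theatre_day = 480                                     # one 8-hour theatre-day
early = online_first_fit(cases, theatre_day)
late = delayed_first_fit_decreasing(cases, theatre_day)
print(early, late)  # delayed packing typically needs no more theatre-days
```

The analogy captures only the packing-efficiency side of the result; the thesis's trade-off against giving patients adequate notice is not modelled here.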

The dynamics of differentially rotating neutron stars

Watts, Anna Louise January 2003 (has links)
This thesis investigates the effect of rapid accretion and differential rotation on neutron star oscillations. The research is motivated by the fact that vibrating neutron stars are a promising source of gravitational waves. The first part of the thesis is a study of a nascent neutron star accreting supernova remnant material. We model an unstable r-mode oscillation that leads to the emission of gravitational waves, and the torques and heating associated with rapid accretion onto a star with a magnetic field. We consider the consequences for both gravitational wave emission and the rotation rate of the star. The main part of the thesis addresses differential rotation. This is likely to arise at times, such as the immediate aftermath of the supernova, when we expect strong vibrations. We focus on two factors unique to differentially rotating systems: dynamical shear instabilities, and the existence of a corotation band (a frequency band in which the mode pattern speed matches the local angular velocity). Using a simple model, we find dynamical shear instabilities that arise where modes cross into the corotation band, if the degree of differential rotation exceeds a certain threshold. Recently, several authors have reported the discovery of dynamical instabilities in differentially rotating stars at low values of the ratio of kinetic to potential energy. We demonstrate that our instability mechanism explains all of the reported features of these instabilities. We also investigate the nature of oscillations within the corotation band. The band gives rise to a continuous spectrum whose collective physical perturbation exhibits complicated temporal behaviour. We also report the existence of modes within the continuous spectrum that appear physically indistinguishable from the discrete modes outside the band, despite the singular nature of their eigenfunctions.
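The corotation condition described above can be stated compactly. The following uses standard mode notation (not taken from the thesis): a mode with inertial-frame frequency σ and azimuthal wavenumber m, in a star with angular velocity profile Ω(r).

```latex
% Pattern speed of an oscillation mode with frequency \sigma and
% azimuthal wavenumber m:
\Omega_p = \frac{\sigma}{m}.
% The mode lies in the corotation band when the pattern speed matches
% the local angular velocity somewhere in the star, i.e. there exists
% a corotation radius r_c with
\Omega(r_c) = \frac{\sigma}{m},
\qquad
\min_r \Omega(r) \;\le\; \frac{\sigma}{m} \;\le\; \max_r \Omega(r).
```

For uniform rotation the band collapses to a single frequency per m, which is why the corotation band is a feature unique to differentially rotating stars.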

Micromagnetic simulations of three dimensional core-shell nanostructures

Knittel, Andreas January 2011 (has links)
In the last 20 years, computer simulations based on the micromagnetic model have become an important tool for the characterisation of ferromagnetic structures. This work mainly uses the finite-element (FE) based micromagnetic solver Nmag to analyse the magnetic properties of ferromagnetic shell structures of different shapes and with dimensions below one micrometre. As the magnetic properties of structures in this size regime depend crucially on their shape, those properties can potentially be engineered through shape manipulation. The finite-element method (FEM) discretises the micromagnetic equations on an unstructured mesh and, thus, is suited to model structures of arbitrary shape. The standard way to compute the magnetostatic potential within FE-based micromagnetics is to use the hybrid finite element method / boundary element method (FEM/BEM), which, however, becomes computationally expensive for structures with a large surface. This work increases the efficiency of the hybrid FEM/BEM by using a data-sparse matrix type (hierarchical matrices) in order to extend the range of structures accessible by micromagnetic simulations. It is shown that this approximation leads only to negligible errors. The micromagnetic simulations performed include the finding of (meta-)stable micromagnetic states and the analysis of the magnetic reversal behaviour along certain spatial directions at different structure sizes and shell thicknesses. In the case of pyramidal shell structures a phase diagram is delineated which specifies the micromagnetic ground state as a function of structure size and shell thickness. An additional study demonstrates that a simple micromagnetic model can be used to qualitatively understand the magnetic reversal of a triangular platelet-shaped core-shell structure, which exhibits specific magnetic properties as its core material becomes superconducting below a certain critical field Hcrit.
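The data-sparse idea behind hierarchical matrices can be sketched in a few lines: an admissible off-diagonal block of a 1/r interaction matrix between two well-separated point clusters has low numerical rank, so it can be stored as a truncated low-rank factorisation at a fraction of the dense cost with negligible error. This is only an illustration of the principle (production H-matrix codes use cheaper low-rank constructions than a full SVD, and Nmag's actual implementation is not shown).

```python
import numpy as np

# Two well-separated point clusters, as in the admissible off-diagonal
# blocks of a hierarchical matrix partition.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(200, 3))   # cluster A (unit cube at origin)
y = rng.uniform(5.0, 6.0, size=(200, 3))   # cluster B, far from A

# Dense interaction block with a 1/r kernel (the kind of kernel that
# appears in boundary-element magnetostatics).
r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
block = 1.0 / r

# Low-rank compression: keep singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))           # numerical rank of the block
approx = U[:, :k] * s[:k] @ Vt[:k, :]

rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
storage_ratio = k * (block.shape[0] + block.shape[1]) / block.size
print(k, rel_err, storage_ratio)
```

Because the clusters are far apart, the kernel is smooth over the block and the rank k is far below 200, which is exactly why the surface-dominated FEM/BEM cost can be reduced.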

Modelling of multiphase multicomponent chemically reacting flows through packed beds

Koopmans, Robert-Jan January 2013 (has links)
Currently used rocket propellants such as hydrazine, monomethylhydrazine, unsymmetrical dimethylhydrazine and nitrogen tetroxide are carcinogenic and toxic to the environment, and therefore special protective measures are required when producing, transporting, storing and handling them. Employing alternatives could save costs, and this has revived research interest in so-called green propellants. Hydrogen peroxide is one such alternative. It requires a catalyst bed to decompose the liquid peroxide into steam and oxygen. The purpose of this work is to design numerical tools that describe the processes in the catalyst bed and subsequently employ these tools to predict the performance of the catalyst bed and investigate the influence of design choices on that performance. In contrast to the models described in the literature, the tools developed in this thesis are two-fluid models. In order to test the reliability of the tools, results are compared with experimental data. A single-control-volume two-fluid model has been developed to investigate the pressure drop over the catalyst bed and the influence of the shape and size of catalyst pellets on the pressure drop. Parametric studies with this model revealed that the Tallmadge equation gives a better prediction of the pressure gradient than the more traditionally employed Ergun equation. It was also found that, for a given bed length, cylindrical pellets with a diameter-to-length ratio of 2 or more give a lower pressure drop than longer cylindrical pellets, while achieving the same level of decomposition. A one-dimensional two-fluid model has been developed to obtain longitudinal variations of fluid properties. This model revealed that the catalyst bed can be divided into three sections: a pre-boiling region, a rapid-conversion region and a dry-out region. It was shown that most of the mass transfer takes place due to evaporation.
A sensitivity analysis showed that the gas-liquid interfacial area hardly influences the results.
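For reference, the Ergun correlation compared against the Tallmadge equation above can be sketched as follows. This shows only the standard Ergun form (the Tallmadge equation modifies the inertial term and is not reproduced here); the parameter values are illustrative, not the thesis's.

```python
def ergun_pressure_gradient(u, rho, mu, d_p, eps):
    """Ergun equation: pressure gradient (Pa/m) across a packed bed.

    u    superficial velocity (m/s)
    rho  fluid density (kg/m^3)
    mu   dynamic viscosity (Pa s)
    d_p  pellet diameter (m)
    eps  bed void fraction (dimensionless)
    """
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * (1.0 - eps) * rho * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

# Illustrative case: water-like fluid through a bed of 2 mm pellets.
dp_dx = ergun_pressure_gradient(u=0.01, rho=1000.0, mu=1e-3,
                                d_p=2e-3, eps=0.4)
print(dp_dx)  # Pa/m
```

The two terms make the bed-design trade-off visible: the viscous term scales with 1/d_p² while the inertial term scales with 1/d_p, so pellet size and shape (through the effective diameter and void fraction) dominate the pressure drop.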

X-ray structure of porphobilinogen deaminase from A. thaliana at 1.5 Å resolution

Roberts, Andrea January 2008 (has links)
The enzyme porphobilinogen deaminase (PBGD) catalyses the stepwise polymerization of the monopyrrole, porphobilinogen (PBG), to give the linear hydroxymethylbilane, preuroporphyrinogen. Preuroporphyrinogen is then cyclised, with rearrangement, to give uroporphyrinogen III, the ubiquitous precursor of haems, chlorophylls and corrins. In Arabidopsis thaliana, PBGD is the fifth enzyme of the chlorophyll biosynthetic pathway. The mature A. thaliana PBGD protein of 320 amino acids was expressed from two synthetic genes using the pT7-7 vector in Escherichia coli strain BL21 (DE3). One construct was identical to the nucleotide sequence of the A. thaliana HEMC (AT5G08280) coding region and the other was similar, but with an E. coli codon bias. Neither recombinant enzyme contained the chloroplast import sequence; both possessed N-termini of NH2-MVAV… rather than NH2-CVAV… A high proportion (over 90%) of the expressed protein was found to be insoluble and much time was spent increasing the yield of soluble enzyme and obtaining sufficient material for crystal preparation. No significant differences in expression were noted for the two constructs and both purified enzymes had Mr values of 34,928 ± 4 as measured by mass spectrometry. Crystals were obtained from screens using 30% PEG 4000, 0.1 M NaAc and 0.1 M MgCl2, pH 4.6, containing 2 mM dithiothreitol. Data from suitable crystals were collected at the ESRF at Grenoble and were in space group C2 with unit cell dimensions of 142.1 Å × 37.36 Å × 55.37 Å; α=90.00º, β=104.83º and γ=90.00º. A structure for the A. thaliana PBGD was obtained at 1.5 Å resolution by molecular replacement using the E. coli PBGD enzyme as the search model. Programmes used for the refinement included the CCP4 suite, MOSFLM, SORTMTZ, SCALA, TRUNCATE and HKLVIEW. The structure of the A. thaliana PBGD enzyme shows the presence of three domains, each of approximately 100 residues.
A deep cavity between domains I and II constitutes the active site and harbours the dipyrromethane cofactor. Domain III provides the attachment site for the cofactor, which is covalently bound to Cys 254. The structure shows, for the first time, a flap or “lid” over the active site, not previously observed in the E. coli and human PBGD structures. The differences and similarities between the A. thaliana PBGD structure and the deaminase structures from E. coli and human sources are discussed. As a safeguard, a selenomethionine derivative of the enzyme was also prepared and crystals were obtained for possible multiwavelength anomalous dispersion (MAD) experiments. Two mutants of A. thaliana PBGD, D95N and R161K, were prepared and the proteins were isolated. The D95N mutation led to an inactive enzyme, whereas the R161K mutation, which substituted one of the six conserved active-site arginine residues, yielded an enzyme with 10% activity and a lowered pH optimum. The thesis presents, for the first time, the X-ray structure of a PBGD from a higher plant, Arabidopsis thaliana, and resolves the “lid” over the active site. The importance of the active-site “lid” in the functioning of the enzyme is discussed.

Modelling breakdown durations in simulation models of engine assembly lines

Lu, Lanting January 2009 (has links)
Machine failure is often an important source of variability and so it is essential to model breakdowns in manufacturing simulation models accurately. This thesis describes the modelling of machine breakdown durations in simulation models of engine assembly lines. To simplify the inputs to the simulation models for complex machining and assembly lines, the Arrows classification method has been derived to group machines with similar distributions of breakdown durations, where the two-sample Cramér–von Mises statistic and bootstrap resampling are used to measure the similarity of two sets of data. We use finite mixture distributions fitted to the breakdown-duration data of groups of machines as the input models for the simulation models. We evaluate the complete modelling methodology that involves the use of the Arrows classification method and finite mixture distributions, by analysing the outputs of the simulation models using different input distributions for describing the machine breakdown durations. Details of the methods and results of the grouping processes are presented and demonstrated using examples.
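A minimal sketch of the similarity test behind such grouping, under stated assumptions: a simplified Cramér–von Mises-type distance (sum of squared ECDF differences over the pooled sample) combined with a pooled-resampling p-value. This is not the Arrows method itself; the exact statistic normalisation and resampling scheme in the thesis may differ.

```python
import random

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(v <= x for v in sample) / len(sample)

def cvm_distance(a, b):
    """Squared ECDF-difference distance over the pooled sample
    (a simplified Cramér-von Mises-type statistic)."""
    pooled = list(a) + list(b)
    return sum((ecdf(a, z) - ecdf(b, z)) ** 2 for z in pooled)

def bootstrap_pvalue(a, b, n_boot=500, seed=0):
    """Fraction of pooled-resample distances at least as large as observed:
    small values suggest the two machines' breakdown data differ."""
    rng = random.Random(seed)
    observed = cvm_distance(a, b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in a]
        rb = [rng.choice(pooled) for _ in b]
        if cvm_distance(ra, rb) >= observed:
            count += 1
    return count / n_boot

# Illustrative data: two near-identical samples vs. a clearly shifted one.
same = [1.0, 2.0, 3.0, 4.0, 5.0]
shifted = [11.0, 12.0, 13.0, 14.0, 15.0]
p_same = bootstrap_pvalue(same, [x + 0.1 for x in same])
p_diff = bootstrap_pvalue(same, shifted)
print(p_same, p_diff)
```

Machines whose pairwise p-values stay high could then be placed in one group and share a single fitted input distribution.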

Probabilistic finite element analysis of the uncemented total hip replacement

Dopico Gonzalez, Carolina January 2009 (has links)
There are many interacting factors affecting the performance of a total hip replacement (THR), such as prosthesis design and material properties, applied loads, surgical approach, femur size and quality, interface conditions, etc. All these factors are subject to variation and therefore uncertainties have to be taken into account when designing and analysing the performance of these systems. To address this problem, probabilistic design methods have been developed. A computational probabilistic tool to analyse the performance of an uncemented THR has been developed. Monte Carlo Simulation (MCS) was applied to various models with increasing complexity. In the pilot models, MCS was applied to a simplified finite element (FE) model of an uncemented total hip replacement (UTHR). The implant and bone stiffness, load magnitude and geometry, and implant version angle were included as random variables and a reliable strain-based performance indicator was adopted. The sensitivity results highlighted the bone stiffness, implant version and load magnitude as the most sensitive parameters. The FE model was developed further to include the main muscle forces, and to consider fully bonded and frictional interface conditions. Three proximal femurs and two implants (one with a short and another with a long stem) were analysed. Different boundary conditions were compared, and convergence was improved when the distal portion of the implant was constrained and a frictional interface was employed. This was particularly true when looking at the maximum nodal micromotion. The micromotion results compared well with previous studies, confirming the reliability and accuracy of the probabilistic finite element model (PFEM). Results were often influenced by the bone, suggesting that variability in bone features should be included in any probabilistic analysis of the implanted construct.
This study achieved the aim of developing a probabilistic finite element tool for the analysis of finite element models of uncemented hip replacements and forms a good basis for probabilistic models of constructs subject to implant-position-related variability.
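The probabilistic workflow can be sketched with a toy Monte Carlo model: random inputs, a stand-in response function in place of the FE solve, and a crude correlation-based sensitivity measure. All distributions and the response function below are invented for illustration; they are not the thesis's data or model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Illustrative random inputs (hypothetical distributions).
bone_E = rng.normal(15.0, 2.0, n)      # bone stiffness, GPa
version = rng.normal(15.0, 5.0, n)     # implant version angle, degrees
load = rng.normal(2.3, 0.3, n)         # joint load, kN

# Toy strain-based performance indicator: higher strain for more compliant
# bone, misaligned implants and larger loads (a stand-in for the FE solve).
strain = 1000.0 * load / bone_E * (1.0 + 0.02 * np.abs(version - 15.0))

# Probability that the indicator exceeds an (arbitrary) limit.
p_fail = np.mean(strain > 200.0)

# Crude sensitivity: absolute linear correlation of each input with output.
sens = {name: abs(np.corrcoef(v, strain)[0, 1])
        for name, v in [("bone_E", bone_E),
                        ("version", version),
                        ("load", load)]}
print(p_fail, sens)
```

Note that a linear correlation understates the version angle's influence here because its effect is symmetric about the mean; real probabilistic FE studies use richer sensitivity measures for exactly this reason.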

On self organising cyberdynamic policy

Evans, M. R. January 2017 (has links)
The de facto model of what it means to be effectively organised, hence cybernetically viable, is Stafford Beer’s Viable System Model (VSM). Many studies attest to the efficacy of what the VSM proposes; however, these appear to be largely confined to human-based organisations of particular types, e.g. businesses of assorted sizes and governmental matters. The original contribution to the body of knowledge that this work makes, in contrast, has come from an unconventional source: football (soccer) teams. The unique opportunity identified was to use the vast amounts of football player spatial data, as captured by match scanning technology, to obtain simultaneously the multi-recursive policy characteristics of a real viable system operating in real time under highly dynamical load (threat/opportunity) conditions. It accomplishes this by considering player movement as being representative of the output of the policy function of the viable system model that each player, hence the whole team, is mapped to. As each player decides what they must do at any moment, or might need to do in the immediate future, this is set against their capabilities to deliver against that. This can be said of every player during every stage of any match. As such, their actions (their policies as viable systems) inform, and are informed by, the actions of others. This results in the teams of players behaving in a self-organising manner. Accordingly, in spatially varying player location, one has a single metric, amenable to analysis, that characterises player and hence team function, and ultimately whole-team policy as the policy of a viable system. A key behavioural characteristic of a self-organising system is a power law. Accordingly, by searching for, and obtaining, a power law associated with player movement one thereby obtains the output of the policy function of that whole team as a viable system, and hence of the viable system model that the team maps to.
At the heart of such activity is communication between the players as they proceed to do what they need to do at any given time during a match. This offered another unique opportunity: to measure the amount of spatially underpinned information exhibited by the opposing teams in their entirety and to set this in juxtaposition with their respective power-law characteristics and associated match outcomes. This meant that the power-law characteristic that represents the policy of the viable system, and the amount of information associated with it, could be, and was, examined in the context of success or failure outcomes (as criteria of viability) to discern whether some combinations of both were more profitable than others. This was accomplished by using player position data from an anonymous member of the English Premier Football League playing in an unknown season to provide a quantitative analysis accordingly.
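Extracting a power-law exponent from movement data can be sketched with the standard continuous maximum-likelihood estimator (the Clauset–Shalizi–Newman form). The thesis's actual data and fitting procedure are not shown; `sample_power_law` below merely generates synthetic data to exercise the estimator.

```python
import math
import random

def fit_power_law(xs, x_min):
    """Maximum-likelihood exponent for a continuous power law
    p(x) ~ x^(-alpha), x >= x_min:  alpha = 1 + n / sum(ln(x / x_min))."""
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

def sample_power_law(alpha, x_min, n, seed=0):
    """Synthetic power-law draws via inverse-transform sampling."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

moves = sample_power_law(alpha=2.5, x_min=1.0, n=50_000)
alpha_hat = fit_power_law(moves, x_min=1.0)
print(alpha_hat)  # close to the generating exponent 2.5
```

In practice a power-law claim also needs a principled choice of x_min and a goodness-of-fit test; the estimator above is only the exponent-fitting step.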

Contributions to big geospatial data rendering and visualisations

Tully, D. A. January 2017 (has links)
Current geographical information systems lack features and components which are commonly found within rendering and game engines. When combined with computer game technologies, a modern geographical information system capable of advanced rendering and data visualisation is achievable. We have investigated the combination of big geospatial data and computer game engines for the creation of a modern geographical information system framework capable of visualising densely populated real-world scenes using advanced rendering algorithms. The pipeline imports raw geospatial data in the form of Ordnance Survey data provided by the UK government, LiDAR data provided by a private company, and the global open mapping project OpenStreetMap. Where data is missing from the high-resolution LiDAR source, interpolated Ordnance Survey data is used to produce additional terrain data. Once a high-resolution terrain data set with complete coverage has been generated, sub-datasets can be extracted from the LiDAR using OSM boundary data as a perimeter. The boundaries of OSM represent buildings or assets. Data such as the heights of buildings can then be extracted and used to update the OSM database. Using a novel adjacency matrix extraction technique, 3D model mesh objects can be generated using both LiDAR and OSM information. The generation of model mesh objects from OSM data utilises procedural content generation techniques, enabling the generation of GIS-based 3D real-world scenes. Although LiDAR and Ordnance Survey data are only available for the UK, restricting that part of the generation to UK borders, using OSM alone the system is able to procedurally generate any place in the world covered by OSM.
In this research, to manage the large amounts of data, a novel scenegraph structure has been generated to spatially separate OSM data according to OS coordinates, splitting the UK into 1 km² tiles, and categorising OSM assets such as buildings, highways and amenities. Once spatially organised and categorised as assets of importance, the novel scenegraph allows for data dispersal through an entire scene in real-time. The 3D real-world scenes visualised within the runtime simulator can be manipulated in four main aspects: • Viewing at any angle or location through the use of a 3D and 2D camera system. • Modifying the effects or effect parameters applied to the 3D model mesh objects to visualise user-defined data by use of our novel algorithms and unique lighting data-structure effect file with accompanying material interface. • Procedurally generating animations which can be applied to the spatial parameters of objects, or the visual properties of objects. • Applying the Indexed Array Shader Function and taking advantage of the novel big geospatial scenegraph structure to exploit better rendering techniques in the context of a modern geographical information system, which has not been done before, to the best of our knowledge. Combined with the novel scenegraph structure layout, the user can view and manipulate real-world procedurally generated worlds with additional user-generated content in a number of ways unavailable within current geographical information system implementations. We evaluate multiple functionalities and aspects of the framework. We evaluate the performance of the system by stress testing, measuring frame rates for maps of several sizes, as well as evaluating the benefits of the novel scenegraph structure for categorising, separating, manoeuvring, and data dispersal. One experiment uniformly scaled by n² the number of scenegraph nodes containing no model mesh data, procedurally generated model data, and user-generated model data, and compared runtime parameters and memory consumption. We have compared the technical features of the framework against those of related real-world commercial projects: Google Maps, OSM2World, OSM-3D, OSM-Buildings, OpenStreetMap, ArcGIS, Sustainability Assessment Visualisation and Enhancement (SAVE), and Autonomous Learning Agents for Decentralised Data and Information (ALLADIN). We conclude that, when compared to related research, the framework produces data-sets relevant for visualising geospatial assets from the combination of real-world data-sets, capable of being used by a multitude of external game engines, applications, and geographical information systems. The ability to manipulate the production of said data-sets at pre-compile time aids processing speeds for runtime simulation. This ability is provided by the pre-processor. The added benefit is to allow users to manipulate the spatial and visual parameters in a number of varying ways with minimal domain knowledge. The features of creating procedural animations attached to each of the spatial parameters and visual shading parameters allow users to view and encode their own representations of scenes, which is unavailable within all of the products stated. Each of the alternative projects has similar features, but none allows full animation of all parameters of an asset, spatially, visually, or both. We also evaluated the framework on the implemented features, implementing the needed algorithms and novelties of the framework as problems arose during development. An example is the algorithm for combining the multiple terrain data-sets we have (Ordnance Survey terrain data, and Light Detection and Ranging Digital Surface Model and Digital Terrain Model data) in a justifiable way to produce maps with no missing data values for further analysis and visualisation.
A majority of visualisations are rendered using an Indexed Array Shader Function effect file, structured in a novel design to encapsulate common rendering effects found in commercial computer games and apply them to the rendering of real-world assets for a modern geographical information system. Maps of various sizes, in both dimensions, polygonal density, asset counts, and memory consumption prove successful in relation to real-time rendering parameters, i.e. the visualisation of maps does not create a bottleneck for processing. The visualised scenes allow users to view large, dense environments which include terrain models with procedural and user-generated buildings, highways, amenities, and boundaries. The use of a novel scenegraph structure allows for fast iteration and search from user-defined dynamic queries. Interaction with the framework is provided through a novel Interactive Visualisation Interface. Utilising the interface, a user can apply procedurally generated animations to both spatial and visual properties of any node or model mesh within the scene. We conclude that the framework has been a success. We have completed what we set out to develop and create: we have combined multiple data-sets to create improved terrain data-sets for further research and development. We have created a framework which combines the real-world data of Ordnance Survey, LiDAR, and OpenStreetMap, and implemented algorithms to create procedural assets of buildings, highways, terrain, amenities, model meshes, and boundaries for visualisation, with features which allow users to search and manipulate a city’s worth of data on a per-object basis or in user-defined combinations. The successful framework has been built with the cross-domain specialism needed for such a project.
We have combined the areas of computer games technology, engine and framework development, procedural generation techniques and algorithms, use of real-world data-sets, geographical information system development, data-parsing, big-data algorithmic reduction techniques, and visualisation using shader techniques.
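The 1 km tiling described above can be sketched as a small spatial index. The class, method and asset names below are invented for illustration and are not the framework's actual API.

```python
from collections import defaultdict

TILE = 1000  # 1 km tile edge, in Ordnance Survey metres

class TileScenegraph:
    """Toy spatial index: assets bucketed per 1 km tile, then per category."""

    def __init__(self):
        self.tiles = defaultdict(lambda: defaultdict(list))

    def insert(self, easting, northing, category, asset):
        """File an asset under its tile key and category."""
        key = (int(easting) // TILE, int(northing) // TILE)
        self.tiles[key][category].append(asset)

    def query(self, e_min, n_min, e_max, n_max, category):
        """All assets of one category in tiles overlapping the bounding box."""
        found = []
        for tx in range(int(e_min) // TILE, int(e_max) // TILE + 1):
            for ty in range(int(n_min) // TILE, int(n_max) // TILE + 1):
                bucket = self.tiles.get((tx, ty))
                if bucket:
                    found.extend(bucket.get(category, []))
        return found

sg = TileScenegraph()
sg.insert(432100, 387650, "building", "warehouse")
sg.insert(432900, 387100, "building", "office")
sg.insert(440000, 390000, "building", "distant shed")
sg.insert(432500, 387500, "highway", "A-road segment")
nearby = sg.query(432000, 387000, 433000, 388000, "building")
print(nearby)
```

Because a query touches only the tiles inside the bounding box, cost scales with the visible area rather than the whole map, which is the property that makes real-time dispersal through a large scene feasible.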

Impact of learner control on learning in adaptable and personalised e-learning environments

Mustafa, Alan January 2011 (has links)
The purpose of this thesis is to investigate the impact of learners’ measure of control over their learning, while working in different online learning environments, and how this, in combination with a structured learning material selection according to their learning preferences, can affect their learning performance. A qualitative study was carried out on the understanding of different learning philosophies, different learning environments and different learning preferences, in correlation with learners’ measure of control over their learning environments, in terms of their influence on their learning performance. The research commenced with a survey of UK Higher Educational institutions, to determine the usage of adaptive e-learning systems in UK HE and the type and nature of the systems in use, which in combination with the literature review enabled the clarification of the research hypothesis and objectives. Since a measurement of learners’ learning performance was needed, an adaptable personalised e-learning system (ALPELS) was developed to create an environment where a qualitative measurement could be done. Experimental data was then gathered from two cohorts of MSc students over two semesters, who used the newly designed and developed online learning environment. The successful implementation of the project has produced a large amount of data, which demonstrates a correlation between i) adaptable and personalised e-learning systems, and ii) learners’ learning styles (which in itself supports the behaviouristic approach towards this type of online learning environment – ALPELS).
The study indicates a dependency between an online controlled learning environment and learners’ learning performances, showing that a personalised e-learning system (PELS) would be supportive of recall (R) and understanding (U) types of content materials (with an indication of 4.89%), but also demonstrating an increase in student learning performance in an adaptable e-learning system (ALELS) while using competency (C) types of content materials (with an indication of 5.43%). These outcomes provide a basis for future design of e-learning systems, utilising different models of learner control based on underpinning educational philosophies, in combination with learning preferences, to structure and present learning content according to type.
