11 |
Topics in Forest Product Modelling: The Economics of Bioenergy Product Exports from Forests
Johnston, Craig M. T. 06 November 2014 (has links)
As many countries turn to biomass for energy production to combat climate change, the effects on the global forest products industry remain, for the most part, unknown. Although the individual studies of this thesis stand on their own, the results share a common theme of examining economic issues surrounding a greater reliance on energy derived from forests.
Chapter 1 presents the development and application of a non-linear programming model of global forest product trade used to assess the economic impact of an increase in global bioenergy demand. The results of the study indicate that increased global bioenergy demand will result in increased production of lumber and plywood, but outputs for fibreboard, particleboard and pulp will decline. In addition, renewable energy policies promoting bioenergy cause wood pellet prices to rise which could undermine the effectiveness of such policies.
The European Union (EU) has implemented the most aggressive renewable energy policies in the world, and as a result, has quickly become a global leader in bioenergy production. To meet their targets, the EU is expected to import an unprecedented amount of fibre from timber rich regions, causing ripple effects throughout the global forest products industry. Chapter 2 discusses such EU policies, utilizing the developed global forest products trade model. Results indicate increased EU bioenergy demand is welfare enhancing to the global forest products industry as a whole, although there are winners and losers.
Chapter 3 presents another important issue regarding increased bioenergy demand, that is, the supply of fibre is a limiting factor for its viability as an energy source. The chapter discusses the development and application of an electrical grid model of Alberta that is linked to a fibre transportation model of Alberta and British Columbia. Results show that proximity to a wood pellet producer is critical in the economic viability of retrofitting coal-fired power plants to co-fire with biomass.
Finally, the increasing reliance on bioenergy as a fossil fuel substitute depends critically on the acceptance that the CO2 released in combustion is offset by the re-growth of the forest. Chapter 4 provides a discussion of this issue, citing the significance of the timeline of CO2 release and absorption. If we deem climate change an urgent matter, we may give more weight to current reductions in atmospheric CO2, eroding the carbon neutrality of biomass. / Graduate / 0501 / 0503
|
12 |
Development of an integrated computational tool for modelling structural frames in fire considering local effects
Jiang, Liming January 2016 (has links)
In terms of developing knowledge to enable more effective use of performance-based engineering (PBE), one of the key limitations is the lack of an easy-to-use integrated computational tool that is also robust and comprehensive enough to enable automated modelling of more realistic fire scenarios, i.e., the structural response to localised or travelling fires. The main objective of this thesis is to establish such an integrated computational tool, which shall be based on the OpenSees software framework and facilitated by specially developed approaches to achieve higher efficiency of the integrated analysis. This includes the analysis of heat transfer from the fire to structural members, as well as the analysis of structural response to elevated temperatures during the fire. In this thesis, the research begins with the investigation of the feasibility of dimensional reduction for heat transfer analyses of structural members subjected to localised fire action (SFPE and Eurocode 1 fire models), which can be numerically represented by a linear or exponential correlation between incident heat flux and radial distance. Accurate estimates of the error induced by dimensional reduction are presented under strongly varying localised heat fluxes that represent the most non-uniform fire conditions in a building compartment. It is shown that beams and slabs can be adequately modelled with a lower dimensional heat transfer analysis for ordinary building fires. Using this approach, the complexity of heat transfer modelling and the required computing resource and user effort can both be significantly reduced, especially in cases where structural members are subjected to localised fire action.
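The linear or exponential flux correlation described above can be sketched numerically. In the following snippet all parameter values (peak flux, decay rate, extent) are invented for illustration, not taken from the thesis; it samples the incident flux at stations along a member, each of which could drive a separate lower-dimensional heat transfer analysis:

```python
import numpy as np

def incident_flux_exponential(r, q0=120.0, decay=0.7):
    """Incident heat flux (kW/m^2) as an exponential function of radial
    distance r (m) from the fire centre; q0 and decay are illustrative."""
    return q0 * np.exp(-decay * np.asarray(r, dtype=float))

def incident_flux_linear(r, q0=120.0, r_max=6.0):
    """Linear-decay variant, clipped at zero beyond r_max."""
    r = np.asarray(r, dtype=float)
    return np.clip(q0 * (1.0 - r / r_max), 0.0, None)

# Sample the flux at stations along a beam; each station's value would
# supply the boundary condition for a one-dimensional (through-depth)
# heat transfer analysis, as argued above.
stations = np.linspace(0.0, 6.0, 7)
fluxes = incident_flux_exponential(stations)
```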
Thermo-mechanical simulations are presented to address the behaviour of structural members subjected to localised fire action, for which a ThermalActionWrapper is developed to approximate the temperature distribution from a mixed-order interpolation between sections (beam) or locations (slab). For concrete slabs subjected to localised fire, MITC4-based shell elements are used to account for material and geometric nonlinearities. An integrated simulation environment is developed, designed to be a computational tool that requires limited input but provides a comprehensive solution to the problem of simulating large structural frame and sub-frame response under realistic fire scenarios. A considerable amount of code has been written to create and operate the building model, to process the heat fluxes from the design fires to the structure, and to compute the consequential structural response to the evolution of temperatures within it. Parametric studies have been performed to investigate the computational performance of the newly developed elements in modelling beams and slabs subjected to different cases of localised fire action. The results suggest that 3 to 6 force-based beam elements can adequately describe the localised response; however, more elements are required for a quadratic distribution of incident heat flux and higher temperatures, because the degradation of material strength governs the accuracy, especially when the members are heavily loaded. For slabs exposed to localised fires, centre fires are found to produce greater deflections than corner fires, while lateral restraints applied to the slabs may also lead to higher deflections. A small-scale three-dimensional structural frame is modelled as a demonstration of the tool, tested against a number of localised fire scenarios. The global behaviour of the structure with the local effects induced by the fire action and partially damaged fire protection is investigated.
Severe damage can be found in the members exposed to a single whole-compartment fire, in contrast with the relatively small deflections that are observed when a fully protected column is engulfed by a localised fire. However, if the passive fire protection is partially damaged, collapse may occur in the column as a result of load magnification due to redistribution. To the author's knowledge, this is the first piece of research to develop a practically feasible approach enabling efficient coupled computation of the response of structural frames to realistic fire scenarios on a freely available open-source software platform. Currently this kind of analysis can be carried out by only two or three large consulting firms, because of the prohibitive commitment of analyst time and effort and, to a lesser extent, the need for significant computing resources. The work of this thesis will contribute enormously towards making high-end performance-based engineering of structural fire resistance a much more practical proposition for small and medium-size structural consultancies. Furthermore, the choice of OpenSees, a well-respected software framework for simulating structural response to earthquakes, naturally enables this work to be extended to simulating multi-hazard structural resistance, such as in the event of a fire following an earthquake that may have locally damaged passive fire protection.
|
13 |
A clustering model for item selection in visual search
McIlhagga, William H. January 2013 (has links)
In visual search experiments, the subject looks for a target item in a display containing different distractor items. The reaction time (RT) to find the target is measured as a function of the number of distractors (set size). RT is either constant or increases linearly with set size. Here we suggest a two-stage model for search in which items are first selected and then recognized. The selection process is modeled by (a) grouping items into a hierarchical cluster tree, in which each cluster node contains a list of all the features of items in the cluster, called the object file, and (b) recursively searching the tree by comparing target features to the cluster object file to quickly determine whether the cluster could contain the target. This model is able to account for both constant and linear RT versus set size functions. In addition, it provides a simple and accurate account of conjunction searches (e.g., looking for a red N among red Os and green Ns), in particular the variation in search rate as the distractor ratio is varied.
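The selection stage described above can be sketched in a few lines (an illustrative toy, with hypothetical feature names; the model's actual grouping and timing assumptions are richer than this): each cluster stores the union of its members' features as its object file, and the search prunes any cluster whose object file lacks a target feature.

```python
class Cluster:
    """Node in a hierarchical cluster tree; leaves hold a single item."""
    def __init__(self, left=None, right=None, item=None):
        self.left, self.right, self.item = left, right, item
        # The cluster's "object file": every feature found anywhere inside it.
        self.object_file = (set(item) if item is not None
                            else left.object_file | right.object_file)

def search(node, target):
    """Return (found, clusters_visited) for a recursive pruned search."""
    if not set(target) <= node.object_file:
        return False, 1                  # prune: cannot contain the target
    if node.item is not None:
        return set(node.item) == set(target), 1
    found, n_left = search(node.left, target)
    if found:
        return True, 1 + n_left
    found, n_right = search(node.right, target)
    return found, 1 + n_left + n_right

# Conjunction search: a red N among red Os and green Ns.
items = [("red", "O"), ("green", "N"), ("red", "O"), ("red", "N")]
leaves = [Cluster(item=it) for it in items]
root = Cluster(Cluster(leaves[0], leaves[1]), Cluster(leaves[2], leaves[3]))
found, visited = search(root, ("red", "N"))
```

Intuitively, a single comparison can reject a whole cluster, keeping RT near constant when distractors group away from the target; in a conjunction search like the one above, every cluster's object file contains all the target features, so more clusters must be opened and RT grows with set size.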
|
14 |
Modelling a Haptic Categorization Task using Bayesian Inference
Arthur, Grace January 2024 (has links)
We rely heavily on our sense of touch to complete a myriad of tasks each day, yet past research has focused heavily on the visual and auditory systems, rarely concentrating on the tactile system. In the current study, we investigate human performance on a haptic categorization task and ask: what strategy do humans use to sense, interpret, and categorize objects using their sense of touch? During the experiment, participants complete 810 trials, on each of which they receive a 3D-printed object and categorize it as belonging to Category A or B. We sample the objects from a set of 25 objects, each of which differs in number of sides and dot spacing on one face. We define Categories A and B using overlapping Gaussian distributions, where Category A objects generally have fewer sides and smaller dot spacing, while Category B objects generally have more sides and larger dot spacing. Participants begin with no knowledge of the categories and learn them using feedback provided on each trial. We compare human performance to a Feature-Focused Bayesian Observer that weights the sides and dots feature information based on their reliability. It combines information from one or both features to inform a final percept and categorize each object. Our results support the hypothesis that humans employ a feature-focused categorization strategy on this task, during which they learn the categories and consider one or both of an object’s features based on their reliability. As participants complete more trials, they appear to maintain or switch to more optimal categorization strategies. Video analysis of hand movements during the experiment strongly supports these findings. / Thesis / Master of Science (MSc) / We use our senses every day to accomplish numerous categorization tasks: categorizing footsteps as originating from an ‘intruder’ or a ‘family member’, a distant animal as a ‘coyote’ or a ‘dog’, a writing utensil as a ‘pen’ or a ‘pencil’, and so on.
Despite performing countless categorization tasks each day, we often overlook their complexity. Our research investigates the processing behind these tasks, specifically those tasks completed using the sense of touch. We conclude that people combine the most reliable information from their environments to determine the identity of an unknown object or stimulus. Moving forward, we can apply this deepened understanding of tactile processing to advance research in special populations and robotic applications.
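A minimal sketch of such a reliability-weighted observer can be written down directly (the category means and standard deviations below are invented for illustration; the thesis's actual category distributions are not reproduced here): multiplying per-feature Gaussian likelihoods means the feature whose category distributions are narrower and better separated dominates the decision.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative category parameters: (mean, sd) for each feature.
CATEGORIES = {
    "A": {"sides": (5.0, 1.5), "dots": (3.0, 1.0)},  # fewer sides, smaller spacing
    "B": {"sides": (8.0, 1.5), "dots": (5.0, 1.0)},  # more sides, larger spacing
}

def posterior_A(sides, dots, prior_A=0.5):
    """P(Category A | both features), assuming conditional independence
    of the two features given the category."""
    like = {c: gauss(sides, *p["sides"]) * gauss(dots, *p["dots"])
            for c, p in CATEGORIES.items()}
    num = prior_A * like["A"]
    return num / (num + (1.0 - prior_A) * like["B"])
```

With these illustrative numbers, an object exactly midway between the categories on both features yields a posterior of 0.5, while shrinking one feature's standard deviations makes that feature dominate the categorization.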
|
15 |
Functional neuroanatomy of action selection in schizophrenia
Romaniuk, Liana January 2011 (has links)
Schizophrenia remains an enigmatic disorder with unclear neuropathology. Recent advances in neuroimaging and genetic research suggest alterations in glutamate-dopamine interactions adversely affecting synaptic plasticity both intracortically and subcortically. Relating these changes to the manifestation of symptoms presents a great challenge, requiring a constrained framework to capture the most salient elements. Here, a biologically-grounded computational model of basal ganglia-mediated action selection was used to explore two pathological processes that hypothetically underpin schizophrenia. These were a drop in the efficiency of cortical transmission, reducing both the signal-to-noise ratio (SNR) and overall activity levels; and an excessive compensatory upregulation of subcortical dopamine release. It was proposed that reduced cortical efficiency was the primary process, which led to a secondary disinhibition of subcortical dopamine release within the striatum. This compensation was believed to partly recover lost function, but could then induce disorganised-type symptoms - summarised as selection "Instability" - if it became too pronounced. This overcompensation was argued to be countered by antipsychotic medication. The model's validity was tested in an fMRI (functional magnetic resonance imaging) study of 16 healthy volunteers, using a novel perceptual decision-making task, and was found to provide a good account of pallidal activation. Its account of striatum was developed and improved with a small number of principled model modifications: the inclusion of fast-spiking interneurons within striatum, and their inhibition by the basal ganglia's key regulatory nucleus, the external globus pallidus. A key final addition was the explicit modelling of the dopaminergic midbrain, which is dynamically regulated by both cortex and the basal ganglia.
This enabled hypotheses concerning the effects of cortical inefficiency, compensatory dopamine release and medication to be directly tested. The new model was verified with a second set of 12 healthy controls. Its pathological predictions were compared to data from 12 patients with schizophrenia. Model simulations suggested that Instability went hand-in-hand with cortical inefficiency and secondary dopamine upregulation. Patients with high Instability scores showed a loss of SNR within decision-related cortex (consistent with cortical inefficiency); an exaggerated response to task demands within substantia nigra (consistent with dopaminergic upregulation); and had an improved fit to simulated data derived from increasingly cortically-inefficient models. Simulations representing the healthy state provided a good account for patients’ motor putamen, but only cortically-inefficient simulations representing the ill state provided a fit for ventral-anterior striatum. This fit improved as the simulated model became more medicated (increased D2 receptor blockade). The relative improvement of this account correlated with patients’ medication dosage. In summary, by distilling the hypothetical neuropathology of schizophrenia into two simplified umbrella processes, and using a computational model to consider their effects within action selection, this work has successfully related patients’ fMRI activation to particular symptomatology and antipsychotic medication. This approach has the potential to improve patient care by enabling a neurobiological appreciation of their current illness state, and tailoring their medication level appropriately.
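The two umbrella processes can be caricatured in a few lines (a deliberately minimal rate-coded sketch with invented names and numbers, not the thesis's model): cortical efficiency scales the salience signal relative to noise, dopamine scales striatal gain, and the most active channel is selected via disinhibition of its target.

```python
import random

def select_channel(saliences, cortical_efficiency=1.0, dopamine=1.0,
                   noise_sd=0.1, seed=None):
    """Winner-take-all over noisy, gain-modulated striatal activity."""
    rng = random.Random(seed)
    striatal = [dopamine * (cortical_efficiency * s + rng.gauss(0.0, noise_sd))
                for s in saliences]
    # GPi output is tonically inhibitory; the strongest striatal channel
    # disinhibits its target, i.e. is "selected".
    return max(range(len(striatal)), key=striatal.__getitem__)

# With an efficient cortex the most salient action wins; degrading
# efficiency lets noise dominate, making selection unstable, while raising
# dopamine rescales all channels (it cannot restore a lost SNR).
choice = select_channel([0.2, 0.9, 0.4], cortical_efficiency=1.0, noise_sd=0.0)
```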
|
16 |
Computational modelling of concrete footing rotational rigidity
Fraser, Elsje S. 12 1900 (has links)
Thesis (MScEng (Civil Engineering))--Stellenbosch University, 2008.
|
17 |
Role of Semantics in the Reconsolidation of Episodic Memories
Kumar, Shikhar January 2012 (has links)
Evidence suggests that when memories are reactivated they become labile and can be updated or even erased. Reactivation induces plasticity in memory representations, rendering them fragile, much as they were after initial acquisition. When a memory has been reactivated it must be re-stabilized, which requires reconsolidation. A recent set of studies established the phenomenon of memory reconsolidation for episodic memory (Hupbach et al., 2007, 2008, 2011). That reconsolidation effects apply to explicit memory, which requires conscious recollection, has far-reaching implications. The Hupbach et al. studies investigated the ability of subtle reminders to trigger reconsolidation; these reminders consisted of the same spatial context, the same experimenter and a reminder question. Given that we live in a predictable world, episodes are not random occurrences of events in time and space, but instead contain statistical and semantic regularities. This leaves open the question of whether semantic relations and statistical regularities between episodes can trigger a reactivation of episodic memory. If so, how would this affect the status of the reactivated memory? This dissertation explored the role of semantic relatedness between the elements of different episodes in memory reactivation and subsequent updating. We focused particularly on categorical and contextual aspects of semantic relations. A series of experiments considered different kinds of semantic relations between elements of episodes, providing evidence of memory reactivation and updating as a consequence of basic-level category relations between items in two separate episodes. We also tested the predictions of the Temporal Context Model (TCM) (Sederberg et al., 2011) for our experimental paradigm and show that the current TCM is not able to account for all the effects of semantic relatedness in the reconsolidation paradigm.
Finally, we explore an alternative approach that seeks to explain memory reconsolidation as Bayesian inference. Our results provide support for this Bayesian framework, showing its potential for exploring different aspects of memory organization.
|
18 |
An associative approach to task switching
Forrest, Charlotte Louise January 2012 (has links)
This thesis explores the behaviour of participants taking an associative approach to a task-cueing paradigm. Task-cueing is usually intended to explore controlled processing of task-sets. But small stimulus sets plausibly afford associative learning via simple and conditional discriminations. In six experiments participants were presented with typical task-cueing trials: a cue (coloured shape) followed by a digit (or in Experiment 5 a symbol) requiring one of two responses. In the standard Tasks condition (Monsell Experiment and Experiments 1-3), the participant was instructed to perform either an odd/even or a high/low task dependent on the cue. The second condition was intended to induce associative learning of cue + stimulus-response mappings. In general, the Tasks condition showed a large switch cost that reduced with preparation time, a small, constant congruency effect and a small perturbation when new stimuli were introduced. By contrast the CSR condition showed a small, reliable switch cost that did not reduce with preparation time, a large congruency effect that changed over time and a large perturbation when new stimuli were introduced. These differences may indicate automatic associative processing in the CSR condition and rule-based classification in the Tasks condition. Furthermore, an associative model based on the APECS learning algorithm (McLaren, 1993) provided an account of the CSR data. Experiment 3 showed that participants were able to deliberately change their approach to the experiment from using CSR instructions to using Tasks instructions, and to some extent vice versa. Experiments 4 & 5 explored the cause of the small switch cost in the CSR condition. Consideration of the aspects of the paradigm that produced the switch cost in the APECS model produced predictions, which were tested against behavioural data. Experiment 4 found that the resulting manipulation made participants more likely to induce task-sets. 
Experiment 5 used random symbols instead of numbers, removing the underlying task-sets. The results of this experiment broadly agreed with the predictions made using APECS. Chapter 6 considers an initial attempt to create a real-time version of APECS. It also finds that an associative model of a different class (AMAN, Harris & Livesey, 2010) can provide an account of some, but not all, of the phenomena found in the CSR condition. This thesis concludes that performance in the Tasks condition is suggestive of the use of cognitive control processes, whilst associatively based responding is available as a basis for performance in the CSR condition.
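To make the associative account of the CSR condition concrete, a bare-bones delta-rule learner over cue, stimulus and cue-stimulus conjunction units is sketched below. This is a stand-in, not APECS, whose adaptive-generalisation machinery is considerably more sophisticated, and all names and values are illustrative. Because incongruent cue+stimulus pairs form an XOR-like problem, the conjunction units must carry the discriminative weight:

```python
def encode(cue, stim):
    """Input units: elemental cue, elemental stimulus, their conjunction."""
    return (cue, stim, cue + "*" + stim)

def train(trials, lr=0.1, epochs=200):
    """Delta-rule (LMS) learning of cue + stimulus -> response mappings."""
    w = {}
    for _ in range(epochs):
        for inp, target in trials:
            out = sum(w.get(f, 0.0) for f in inp)
            err = target - out
            for f in inp:
                w[f] = w.get(f, 0.0) + lr * err
    return w

def respond(w, inp):
    """Binary response from the summed associative strength."""
    return 1 if sum(w.get(f, 0.0) for f in inp) > 0 else -1

# The two cues demand opposite responses to the same stimuli
# (the incongruent case), as under the two tasks' mappings.
trials = [(encode("cueA", "s1"), -1), (encode("cueA", "s2"), 1),
          (encode("cueB", "s1"), 1), (encode("cueB", "s2"), -1)]
w = train(trials)
```

After training, the elemental weights largely cancel and the conjunction weights solve the discrimination, which is one way a purely associative system can produce accurate cued responding without anything resembling a task-set.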
|
19 |
In silico modelling of tumour-induced angiogenesis
Connor, Anthony J. January 2014 (has links)
Angiogenesis, the process by which new vessels form from existing ones, is a key event in the development of a large and malignant vascularised tumour. A rapid expansion in in vivo and in vitro angiogenesis research in recent years has led to increased knowledge about the processes underlying angiogenesis and to promising steps forward in the development of anti-angiogenic therapies for the treatment of various cancers. However, substantial gaps in knowledge persist and the development of effective treatments remains a major challenge. In this thesis we study tumour-induced angiogenesis within the context of a highly controllable experimental environment: the cornea micropocket assay. Using a multidisciplinary approach that combines experiments, image processing and analysis, and mathematical and computational modelling, we aim to provide mechanistic insight into the action of two angiogenic factors which are known to play central roles during tumour-induced angiogenesis: vascular endothelial growth factor A (VEGF-A) and basic fibroblast growth factor (bFGF). Image analysis techniques are used to extract quantitative data, which are both spatially and temporally resolved, from experimental images. These data are then used to develop and parametrise mathematical models describing the evolution of the corneal vasculature in response to both VEGF-A and bFGF. The first models developed in this thesis are one-dimensional continuum models of angiogenesis in which VEGF-A and/or bFGF are released from a pellet implanted into a mouse cornea. We also use an object-oriented framework, designed to facilitate the re-use and extensibility of hybrid multiscale models of angiogenesis and vascular tumour growth, to develop a complementary three-dimensional hybrid model of the same system. The hybrid model incorporates a new non-local cell sensing model which facilitates the formation of well-perfused and functional vascular networks in three dimensions. 
The continuum models are used to assess the utility of the cornea micropocket assay as a quantitative assay for angiogenesis, to characterise proposed synergies between VEGF-A and bFGF, and to generate experimentally testable predictions regarding the effect of anti-VEGF-A therapies on bFGF-induced angiogenesis. Meanwhile, the hybrid model is used to provide context for the comparison that is drawn between the continuum models and the data, to study the relative distributions of perfused and unperfused vessels in the evolving neovasculature, and to investigate the impact of tip cell sensing dysregulation on the angiogenic response in the cornea. We have found that by exploiting a close link with quantitative data we have been able to extend the predictive and hypothesis-testing capabilities of our models. As such, this thesis demonstrates the potential for integrating mathematical modelling with image analysis techniques to increase insight into the mechanisms underlying angiogenesis.
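A toy one-dimensional continuum sketch in the spirit of the models described above (all parameters are invented and the growth-factor profile is frozen, whereas the thesis's models are parametrised against image-derived data) evolves tip-cell density under diffusion plus chemotaxis up the gradient towards the pellet:

```python
import numpy as np

nx, dx, dt = 101, 0.01, 1e-4
D, chi = 1e-3, 0.05                     # diffusion and chemotaxis coefficients
x = np.linspace(0.0, 1.0, nx)
c = x.copy()                            # frozen linear growth-factor profile; pellet at x = 1
n = np.exp(-((x - 0.2) ** 2) / 0.005)   # tip cells start near the limbal vessels

grad_c = np.gradient(c, dx)

def step(n):
    # Conservative update of dn/dt = -d/dx( -D dn/dx + chi * n * dc/dx )
    flux = -D * np.gradient(n, dx) + chi * n * grad_c
    n = n - dt * np.gradient(flux, dx)
    n[0] = n[-1] = 0.0                  # crude absorbing boundaries
    return np.clip(n, 0.0, None)

for _ in range(2000):
    n = step(n)

centre_of_mass = float((x * n).sum() / n.sum())
```

The density drifts towards the pellet, the qualitative behaviour the corneal assay data constrain; in the thesis the interesting work lies in fitting such models to the spatio-temporally resolved measurements and in the coupled VEGF-A/bFGF kinetics, none of which this sketch attempts.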
|
20 |
A systems biology approach to multi-scale modelling and analysis of planar cell polarity in Drosophila melanogaster wing
Gao, Qian January 2013 (has links)
Systems biology aims to describe and understand biology at a global scale, where biological systems function as a result of complex mechanisms operating at several scales. Modelling and simulation are computational tools that are invaluable for describing, understanding and predicting these mechanisms in a quantitative and integrative way. Multi-scale methods that couple the design, simulation and analysis of models spanning several spatial and temporal scales are thus becoming an emerging focus of systems biology. This thesis uses an exemplar – Planar Cell Polarity (PCP) signalling – to illustrate a generic approach to modelling biological systems at different spatial scales, using the new concept of Hierarchically Coloured Petri Nets (HCPN). PCP signalling refers to the coordinated polarisation of cells within the plane of various epithelial tissues to generate sub-cellular asymmetry along an axis orthogonal to their apical-basal axes. This polarisation is required for many developmental events in both vertebrates and non-vertebrates. Defects in PCP in vertebrates are responsible for developmental abnormalities in multiple tissues, including the neural tube, the kidney and the inner ear. In the Drosophila wing, PCP is seen in the parallel orientation of hairs that protrude from each of the approximately 30,000 epithelial cells to robustly point toward the wing tip. This work applies HCPN to model a tissue comprising multiple cells hexagonally packed in a honeycomb formation in order to describe the phenomenon of PCP in the Drosophila wing. HCPN facilitate the construction of mathematically tractable, compact and parameterised large-scale models. Different levels of abstraction that can be used to simplify such a complex system are first illustrated. The PCP system is initially represented at an abstract level, without modelling details of the cell.
Each cell is then sub-divided into seven virtual compartments, with adjacent cells coupled via the formation of intercellular complexes. A more detailed model is later developed, describing the intra- and inter-cellular signalling mechanisms involved in PCP signalling. The initial model is of a wild-type organism; a family of related models is then constructed, permitting different hypotheses regarding the mechanisms underlying PCP to be explored. Among them, the largest model consists of 800 cells, which when unfolded yields 164,000 places (each described by an ordinary differential equation). This thesis illustrates the power and validity of the approach by showing how the models can easily be adapted to describe well-documented genetic mutations in the Drosophila wing, and by applying techniques, including clustering and model checking over time series of primary and secondary data, that can be employed to analyse and check such multi-scale models. The HCPN models support the interpretation of biological observations reported in the literature and are able to make sensible predictions. As HCPN model multi-scale systems in a compact, parameterised and scalable way, the approach can be applied to other large-scale or multi-scale systems.
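The "one ordinary differential equation per unfolded place" idea can be illustrated with a single intercellular binding reaction (generic species names and invented rate constants, not the thesis's actual kinetics): protein A on one cell's compartment boundary binds partner B on the adjacent cell's facing compartment to form a bridging complex AB.

```python
# Mass-action kinetics for A + B <-> AB, integrated with forward Euler.
kon, koff = 1.0, 0.5       # illustrative rate constants
A0 = B0 = 1.0              # total amounts in the two facing compartments
AB, dt = 0.0, 0.01
for _ in range(5000):      # integrate to t = 50, well past equilibrium
    A, B = A0 - AB, B0 - AB
    AB += dt * (kon * A * B - koff * AB)
```

Setting the rate expression to zero gives kon(1 - AB)^2 = koff * AB, whose admissible root for these constants is AB = 0.5; the 800-cell model unfolds to roughly 164,000 such coupled equations, one per place.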
|