1 |
Computational Models of Cerebral Hemodynamics. Alzaidi, Samara Samir. January 2009 (has links)
The cerebral tissue requires a constant supply of oxygen and nutrients, which is maintained by delivering a constant supply of blood. The delivery of sufficient blood is preserved by the cerebral vasculature and its autoregulatory function. The cerebral vasculature is composed of the Circle of Willis (CoW), a ring-like anastomosis of arteries at the base of the brain, and its peripheral arteries. However, only 50% of the population have a classical complete CoW network. This implies that the route of blood flow through the cerebral vasculature varies between individuals and depends on where blood is needed most in the brain. Autoregulation is a mechanism carried out by the peripheral arteries and arterioles downstream of the CoW. It ensures delivery of the required cerebral blood flow despite changes in arterial perfusion pressure, through vasoconstriction and vasodilation of the vessels. The mechanisms that control the vessels’ vasomotion may be myogenic, metabolic or neurogenic, or a combination of all three. However, variations in the CoW structure, combined with pathological conditions such as hypertension, a stenosis or an occlusion in one or more of the supplying cerebral arteries, may alter, damage or abolish autoregulation and consequently result in a stroke. Stroke is the most common cerebrovascular disease, affecting millions of people worldwide every year. It is therefore essential to understand cerebral hemodynamics via mathematical modelling of the cerebral vasculature and its regulation mechanisms. This thesis presents a model of the cerebral vasculature coupled with different forms of autoregulation mechanisms. The model was developed over multiple stages. First, a linear model of the CoW was developed, in which the peripheral vessels downstream of the CoW efferent arteries are represented as lumped-parameter variable resistances. The autoregulation function in the efferent arteries was modelled using a PI controller, and a metabolic model was added to the lumped peripheral variable resistances. The model was then modified so that the pressure losses at the CoW bifurcations and the vessels’ tortuosity are taken into account, resulting in a non-linear system. A number of cerebral autoregulation models exist in the literature; however, no model combines a fully populated arterial tree with dynamic autoregulation. The final model presented in this thesis was built by creating an asymmetric binary arterial vascular tree to replace the lumped resistance parameters for the vasculature downstream of each CoW efferent artery. The autoregulation function was introduced to the binary arterial tree by implementing the myogenic and metabolic mechanisms, which are active in its small arteries and arterioles. Both regulation mechanisms were tested in the model. The results indicate that, because of the low pressures experienced by the arterioles downstream of the arterial tree, the myogenic mechanism, which multiple researchers hypothesise to be the main driver of autoregulation, does not change the arterioles’ diameters enough to support autoregulation. The metabolic model showed that it can provide sufficient changes in the arterioles’ diameters, producing a vascular resistance that supports autoregulation.
Combined with the graphical user interfaces provided, the work carried out for this research has the potential to become a significant clinical tool for evaluating patient-specific cases. The research and modelling were performed as part of the Brain Group of the Centre of Bioengineering at the University of Canterbury.
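A minimal sketch of the lumped autoregulation idea summarised above, assuming a single efferent territory whose peripheral bed is one variable resistance adjusted by a PI controller; the set point, gains and pressure values are illustrative assumptions, not parameters from the thesis:

```python
# Lumped cerebral territory: flow through one variable peripheral resistance,
# regulated by a PI controller acting on the flow error. Illustrative values only.
Q_target = 12.5        # desired territory flow (ml/s), assumed
P_v = 5.0              # venous pressure (mmHg), assumed constant
R0 = 7.6               # baseline resistance giving Q_target at 100 mmHg (mmHg*s/ml)
Kp, Ki = 0.2, 0.05     # illustrative PI gains
dt, T = 0.01, 200.0    # time step and total simulated time (s)

R, integral = R0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    P_a = 100.0 if t < 60.0 else 80.0      # step drop in arterial perfusion pressure
    Q = (P_a - P_v) / R                    # Poiseuille-style lumped flow
    error = Q - Q_target                   # flow too high -> constrict (raise R)
    integral += error * dt
    R = max(2.0, min(20.0, R0 + Kp * error + Ki * integral))   # bounded vasomotion

print(f"flow after the pressure drop: {Q:.2f} ml/s (target {Q_target} ml/s)")
```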
|
2 |
Optimizing Non-pharmaceutical Interventions Using Multi-coaffiliation Networks. Loza, Olivia G. 05 1900 (has links)
Computational modeling is of fundamental significance in mapping possible disease spread and designing strategies for its mitigation. Conventional contact networks implement the simulation of interactions as random occurrences, presenting public health bodies with a difficult trade-off between realistic model granularity and robust design of intervention strategies. Recently, researchers have been investigating the use of agent-based models (ABMs) to embrace the complexity of real-world interactions. At the same time, theoretical approaches provide epidemiologists with general optimization models in which demographics are intrinsically simplified. The emerging study of affiliation networks and co-affiliation networks provides an alternative to this trade-off. Co-affiliation networks maintain the realism innate to ABMs while reducing the complexity of contact networks to distinctly smaller k-partite graphs, where each partition represents a dimension of the social model. This dissertation studies the optimization of intervention strategies for infectious diseases, focusing on spread within school systems. First, concepts of synthetic populations and affiliation networks are extended to propose a modified algorithm for the synthetic reconstruction of populations. Second, the definition of multi-coaffiliation networks is presented as the main social model in which risk is quantified and evaluated, thereby obtaining vulnerability indications for each school in the system. Finally, maximization of the mitigation coverage and minimization of the overall cost of intervention strategies are proposed and compared, based on centrality measures.
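A small illustrative sketch of the co-affiliation idea, assuming a toy person-school bipartite graph built with networkx and a simple degree-centrality score as a stand-in for the dissertation's vulnerability measures; the data and the choice of centrality are assumptions for illustration only:

```python
import networkx as nx
from networkx.algorithms import bipartite

people = [f"p{i}" for i in range(8)]
schools = ["school_A", "school_B", "school_C"]
# each edge is an affiliation: a person belongs to a school
affiliations = [("p0", "school_A"), ("p1", "school_A"), ("p2", "school_A"),
                ("p2", "school_B"), ("p3", "school_B"), ("p4", "school_B"),
                ("p5", "school_C"), ("p6", "school_C"), ("p7", "school_C"),
                ("p4", "school_C")]

G = nx.Graph()
G.add_nodes_from(people, bipartite=0)
G.add_nodes_from(schools, bipartite=1)
G.add_edges_from(affiliations)

# Project onto the school partition: two schools are linked when they share people,
# since shared members can act as transmission bridges between schools.
school_net = bipartite.weighted_projected_graph(G, schools)
vulnerability = nx.degree_centrality(school_net)   # toy vulnerability score
for school, score in sorted(vulnerability.items(), key=lambda kv: -kv[1]):
    print(school, round(score, 2))
```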
|
3 |
A theoretical framework for computer models of cooperative dialogue, acknowledging multi-agent conflict. Galliers, J. R. January 1988 (has links)
This thesis describes a theoretical framework for modelling cooperative dialogue. The linguistic theory is a version of speech act theory adopted from Cohen and Levesque, in which dialogue utterances are generated and interpreted pragmatically in the context of a theory of rational interaction. The latter is expressed as explicitly and formally represented principles of rational agenthood and cooperative interaction. The focus is the development of strategic principles of multi-agent interaction as a basis for cooperative dialogue. In contrast to the majority of existing work, these acknowledge the positive role of conflict in multi-agent cooperation and make no assumptions regarding the benevolence and sincerity of agents. The result is a framework wherein agents can resolve conflicts by negotiation. It is a preliminary stage towards the future building of computer models of cooperative dialogue for both HCI and DAI, which will therefore be more widely and generally applicable than those currently in existence. The theory of conflict and cooperation is expressed in the different patterns of mental states which characterise multi-agent conflict, cooperation and indifference as three alternative postural relations; agents can recognise and potentially create these. Dialogue actions are the strategic tools with which mental states can be manipulated, whilst acknowledging that agents are autonomous over their mental states; they have control over what they acquire and reveal in dialogue. Strategic principles of belief and goal adoption are described in terms of the relationships between autonomous agents' beliefs, goals, preferences and interests, and the relation of these to action. Veracity, mendacity, concealing and revealing are defined as properties of acts. The role of all these elements in reasoning about dialogue action and conflict resolution is tested in analyses of two example dialogues: a record of a real trade union negotiation and an extract from "Othello" by Shakespeare.
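A toy sketch of the idea that conflict, cooperation and indifference can be read off the relation between two agents' attitudes towards a proposition; the signed-attitude encoding is an assumption for illustration and is far simpler than the thesis's Cohen-and-Levesque-style formalism:

```python
def postural_relation(attitude_a: int, attitude_b: int) -> str:
    """Attitudes: +1 wants the proposition true, -1 wants it false, 0 has no stake."""
    if attitude_a == 0 or attitude_b == 0:
        return "indifference"
    return "cooperation" if attitude_a == attitude_b else "conflict"

print(postural_relation(+1, -1))  # conflict: the agents want opposite outcomes
print(postural_relation(+1, +1))  # cooperation: a shared goal
print(postural_relation(+1, 0))   # indifference: one agent has no stake
```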
|
4 |
Exploring scaling limits and computational paradigms for next generation embedded systems. Zykov, Andrey V. 01 June 2010 (has links)
It is widely recognized that device and interconnect fabrics at the nanoscale will be characterized by a higher density of permanent defects and increased susceptibility to transient faults. This appears to be intrinsic to nanoscale regimes and fundamentally limits the eventual benefits of the increased device density, i.e., the overheads associated with achieving fault tolerance may counter the benefits of increased device density (the density-reliability tradeoff). At the same time, as devices scale down one can expect a higher proportion of area to be associated with interconnection, i.e., area is wire dominated. In this work we theoretically explore density-reliability tradeoffs in wire-dominated integrated systems. We derive an area scaling model based on simple assumptions capturing the salient features of hierarchical design for high-performance systems, along with first-order assumptions on reliability, wire area, and wire length across hierarchical levels. We then evaluate the overheads associated with using basic fault-tolerance techniques at different levels of the design hierarchy. This model, albeit simplified, allows us to tackle several interesting theoretical questions: (1) When does it make sense to use smaller, less reliable devices? (2) At what scale of the design hierarchy should fault tolerance be applied in high-performance integrated systems?

In the second part of this thesis we explore perturbation-based computational models as a promising choice for implementing next-generation ubiquitous information technology on unreliable nanotechnologies. We show the inherent robustness of such computational models to high defect densities and performance uncertainty, which, when combined with low manufacturing-precision requirements, makes them particularly suitable for emerging nanoelectronics. We propose a hybrid eNano-CMOS perturbation-based computing platform relying on a new style of configurability that exploits the computational model's unique form of unstructured redundancy. We consider the practicality and scalability of perturbation-based computational models by developing and assessing initial foundations for engineering such systems. Specifically, new design and decomposition principles exploiting task-specific contextual and temporal scales are proposed and shown to substantially reduce complexity for several benchmark tasks. Our results provide strong evidence for the relevance and potential of this class of computational models when targeted at emerging unreliable nanoelectronics.
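A back-of-the-envelope sketch of the density-reliability tradeoff posed by question (2), assuming a module that fails if any of its devices fails and classic triple modular redundancy (TMR) as the fault-tolerance technique; the device counts and failure probabilities are illustrative, not taken from the thesis model:

```python
def module_reliability(p_device_ok: float, devices_per_module: int) -> float:
    """A module fails if any one of its devices fails (series reliability)."""
    return p_device_ok ** devices_per_module

def tmr_reliability(r_module: float, r_voter: float = 1.0) -> float:
    """Majority-voted TMR: the system works if at least 2 of 3 replicas work."""
    return r_voter * (3 * r_module**2 * (1 - r_module) + r_module**3)

# Sweep per-device survival probability for a 10,000-device module (values assumed).
for p_ok in (0.999999, 0.99999, 0.9999):
    r = module_reliability(p_ok, devices_per_module=10_000)
    print(f"p_ok={p_ok}: simplex R={r:.4f}, TMR R={tmr_reliability(r):.4f} "
          f"(~3x area plus a voter)")
```

In this toy sweep, TMR only improves on a single copy when the replicated module itself is more than about 50% reliable, which illustrates why the level of the hierarchy at which redundancy is applied matters.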
|
5 |
Strategies of Lithography for Trapping Nano-particles. Rajter, Rick. 01 1900 (has links)
Current research in materials science and engineering continues to focus its attention on systems at the nanoscale. Thin films, nano-particles, quantum dots and nano-wires are just a few of the areas that are becoming important in projects ranging from biomedical transport to nano-gears. Thus, understanding, producing, and creating these systems is also becoming an important challenge for scientists and engineers to overcome. Physically manipulating objects on the atomic scale requires more than just "micro tweezers" to arrange them in a particular system. Another concern is that forces and interactions that could be ignored or approximated at larger scales no longer hold in this regime. The goal of this project is to use computational models to simulate nano-particles interacting with customized, highly tailored surfaces in order to confine and pattern them to desired specifications. The interactions to be considered include electrostatic attraction and repulsion, Hamaker forces, steric effects, dielectric effects of the medium, statistical variability, and mechanically induced surface vibrations, among others. The goal is to be able to manufacture such systems for experimentation in order to compare results to the models. If the models do not hold, we hope to understand the origin of these discrepancies in order to create more robust models for this length scale. Lithography, CVD, and chemical etching will be the primary methods used to create these surfaces on glass substrates. TEM analysis will be compared to modeling through various MD program packages. / Singapore-MIT Alliance (SMA)
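An illustrative DLVO-style sketch of two of the interactions listed above (Hamaker attraction and screened electrostatic repulsion) for a sphere near a flat surface, using the standard textbook sphere-plate approximations; all numerical values are assumed for illustration and are not results of this project:

```python
import math

A_H = 1.0e-20        # Hamaker constant (J), order of magnitude assumed
R_p = 50e-9          # particle radius (m), assumed
kappa = 1.0 / 10e-9  # inverse Debye screening length (1/m), ~10 nm assumed
B = 1.0e-19          # electrostatic prefactor (J), illustrative

def v_vdw(D: float) -> float:
    """Sphere-plate van der Waals energy, valid for separations D << R_p."""
    return -A_H * R_p / (6.0 * D)

def v_elec(D: float) -> float:
    """Screened electrostatic double-layer repulsion (exponential approximation)."""
    return B * math.exp(-kappa * D)

kT = 1.380649e-23 * 298.0
for D_nm in (0.5, 1, 2, 5, 10, 20):
    D = D_nm * 1e-9
    total = (v_vdw(D) + v_elec(D)) / kT
    print(f"D = {D_nm:4.1f} nm: total interaction energy = {total:6.1f} kT")
```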
|
6 |
Reading Aloud: Feedback is Never Necessary. Robidoux, Serje Marc. January 2010 (has links)
Since McClelland and Rumelhart (1981) introduced the concept of interactive activation (IA) to the field of visual word recognition, IA has been adopted by all of the major theoretical models of reading aloud. This widespread adoption of IA has not been met with a close examination of the need for the principal features of this processing approach. In particular, IA assumes feedback from later processing modules to earlier processing modules. Though there exist data that can be explained by such feedback mechanisms, and indeed IA may be an intuitive approach to complex tasks like reading, little effort has been made to explain these same phenomena without feedback. In the present study I apply Occam’s razor to the most successful model of reading aloud (CDP+; Perry, Ziegler, & Zorzi, 2007) and test whether feedback is needed to simulate any of the benchmark phenomena identified by Perry et al. (2007) and Coltheart, Rastle, Perry, Langdon and Ziegler (2001). I find that the data currently do not require any feedback mechanisms in reading aloud, and thus conclude that modelers in reading aloud have been too quick to adopt the principles of IA.
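A minimal two-layer interactive-activation sketch, assuming toy letter and word layers with a switchable feedback connection; this is only meant to show what removing feedback means architecturally and is not the CDP+ model or any of the benchmark simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
W_fwd = rng.normal(scale=0.5, size=(3, 5))   # letter units (5) -> word units (3)
W_back = 0.2 * W_fwd.T                       # word -> letter weights, used only with feedback

def run(letters_input, feedback: bool, steps: int = 30, decay: float = 0.1):
    """Settle both layers; with feedback=False the word layer never talks back."""
    letters = letters_input.copy()
    words = np.zeros(3)
    for _ in range(steps):
        words += 0.2 * (W_fwd @ np.clip(letters, 0, 1)) - decay * words
        top_down = W_back @ np.clip(words, 0, 1) if feedback else 0.0
        letters += 0.2 * (letters_input + top_down) - decay * letters
    return np.round(words, 2)

x = np.array([1.0, 0.8, 0.0, 0.2, 0.0])      # toy letter evidence
print("word activations with feedback:   ", run(x, feedback=True))
print("word activations without feedback:", run(x, feedback=False))
```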
|
7 |
The role of uncertainty and reward on eye movements in natural tasks. Sullivan, Brian Thomas. 18 July 2012 (has links)
The human visual system is remarkable for the variety of functions it can be used for and the range of conditions under which it can perform, from the detection of small brightness changes to guiding actions in complex movements. The human eye is foveated and humans continually make eye and body movements to acquire new visual information. The mechanisms that control this acquisition and the associated sequencing of eye movements in natural circumstances are not well understood.
While the visual system has highly parallel inputs, the fovea must be moved in a serial fashion. A decision process continually occurs where peripheral information is evaluated and a subsequent fixation target is selected. Prior explanations for fixation selection have largely focused on computer vision algorithms that find image areas with high salience, on models that reduce the uncertainty or entropy of visual features, and on heuristic models.
However, these methods are not well suited to modeling natural circumstances in which humans are mobile and eye movements are closely coordinated to gather ongoing task information. Following a computational model of gaze scheduling proposed by Sprague and Ballard (2004), I argue that a systematic explanation of human gaze behavior in complex natural tasks needs to represent task goals, a reward structure for these goals and the uncertainty concerning progress towards those goals. If these variables are represented, it is possible to formulate a decision computation for choosing fixation targets based on an expected value derived from uncertainty-weighted reward.
I present two studies of human gaze behavior in a simulated driving task that provide evidence of the human visual system’s sensitivity to uncertainty and reward. In these experiments, observers tended to monitor an information source more closely if it had a high level of uncertainty, but only when that information was also associated with high reward. Given this behavioral finding, I then present a set of simple candidate models that attempt to explain how humans schedule the acquisition of information over time. These simple models are shown to be inadequate in describing the process of coordinated information acquisition in driving. I therefore present an extended version of the gaze scheduling model adapted to our particular driving task. This formulation allows ordinal predictions of how humans use reward and uncertainty in the control of eye movements and is generally consistent with observed human behavior.
I conclude by reviewing the main results and discussing the merits of the computational models used, possible future behavioral experiments that would more directly test the gaze scheduling model, and revisions to future implementations of the model to more appropriately capture human gaze behavior.
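A toy sketch of the uncertainty-weighted-reward scheduling idea, in the spirit of the Sprague and Ballard (2004) model referenced above but not the thesis implementation; the task names, rewards, noise growth rates and selection rule are illustrative assumptions:

```python
import numpy as np

tasks = ["lane_keeping", "speed_control", "lead_car_distance"]
reward = np.array([1.0, 0.3, 0.6])       # task importance weights (assumed)
growth = np.array([0.05, 0.02, 0.08])    # uncertainty growth per step when unattended
variance = np.full(3, 0.1)               # current state-estimate uncertainty

fixation_counts = np.zeros(3, dtype=int)
for _ in range(200):
    expected_value = reward * variance    # uncertainty-weighted reward per task
    target = int(np.argmax(expected_value))
    fixation_counts[target] += 1
    variance += growth                    # every estimate drifts while time passes
    variance[target] *= 0.3               # fixating a task sharply reduces its uncertainty

for name, n in zip(tasks, fixation_counts):
    print(f"{name}: fixated on {n} of 200 steps")
```

With these assumed numbers, gaze is shared across all three tasks but falls most often on the sources that combine high reward with fast-growing uncertainty, mirroring the qualitative pattern described above.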
|
9 |
Mathematical and computer modelling of the enteric nervous system. Thomas, Evan Alexander. January 2001 (has links) (PDF)
The enteric nervous system (ENS) runs within the intestinal wall and is responsible for initiating and enacting several reflexes and motor patterns, including peristalsis and the complex interdigestive motor programs known as migrating motor complexes (MMCs). The ENS consists of several neuron types, including intrinsic sensory neurons (ISNs), interneurons and motor neurons. A great deal is known about the anatomy, pharmacology and electrophysiology of the ENS, yet there is almost no understanding of how enteric neural circuits perform the functions that they do and how they switch from one function to another. ISNs connect to every neuron type in the ENS and make recurrent connections amongst themselves. Thus, they are likely to play a key role not just in sensory transduction but also in the coordination of reflexes and motor patterns. This thesis has explored how these functions are performed by developing and analysing mathematical and computer models of the network of ISNs. (For complete abstract open document)
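A minimal firing-rate sketch of a recurrently connected ISN population, assuming a single lumped rate variable with mutual excitation so that a brief stimulus triggers sustained activity; the rate equation and parameters are illustrative and are not the models developed in the thesis:

```python
import math

tau = 0.1          # rate time constant (s), assumed
w_rec = 1.2        # recurrent excitatory coupling strength, assumed
dt, T = 0.001, 2.0

def f(x: float) -> float:
    """Saturating gain function (sigmoid) mapping drive to firing rate."""
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 0.5)))

r = 0.0
for step in range(int(T / dt)):
    t = step * dt
    stimulus = 1.0 if 0.2 <= t < 0.4 else 0.0   # brief distension-like input
    drive = w_rec * r + stimulus                # recurrent excitation plus stimulus
    r += dt / tau * (-r + f(drive))
    if step % 500 == 0:
        print(f"t = {t:.1f} s  population rate = {r:.2f}")
```

The printed rates show the population staying active after the stimulus ends, a simple illustration of how recurrent connections amongst ISNs could sustain network activity.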
|
10 |
Comparing Protocell and Surface-Based Models of RNA Replicator Systems and Determining Favourable Conditions for Linkage of Functional Strands / Simulations of RNA Replicator Systems. Shah, Vismay. January 2019 (has links)
In hypothesized RNA-World scenarios, replication of RNA strands is catalyzed by error-prone polymerase ribozymes. Incorrect replication leads to the creation of non-functional, parasitic strands which can invade systems of replicators and lead to their death. Studies have shown two solutions to this problem: spatial clustering of polymerases in models featuring elements to limit diffusion, and group selection in models featuring protocells. Making a quantitative comparison of the two methods using results from the literature has proven difficult due to differences in model design. Here we develop computational models of the replication of a system of polymerases, polymerase complements and parasites, in both spatial and protocell settings with near-identical dynamics, to make a meaningful comparison viable. We compare the models in terms of the maximum mutation rate survivable by the system (the error threshold) as well as the minimum replication rate constant required. We find that protocell models are capable of sustaining much higher maximum mutation rates, and survive under much lower minimum replication rates, than equivalent surface models. We then consider cases where parasites are favoured in replication, and show that the advantage of protocell models is increased. Given that a system of RNA strands undergoing catalytic replication by a polymerase is fairly survivable in protocell models, we attempt to determine whether isolated strands can develop into genomes. We extend our protocell model to include additional functional strands varying in length (and thus replication rate) and allow for the linkage of strands to form proto-chromosomes. We determine that linkage is possible over a broad range of lengths and is stable when considering the joining of short functional strands to the polymerase (and likewise for the complementary sequences). Moreover, linkage of short functional strands to the polymerase helps more cells remain viable after division by guaranteeing that a sufficient number of polymerase equivalents is present in the parent cell prior to splitting. / Thesis / Master of Science (MSc) / Collections of RNA polymers are good candidates for the origin of life. RNA is able to store genetic information and to act as polymerase ribozymes, allowing RNA to replicate RNA. Polymerases have been developed experimentally in labs; however, none is sufficiently general to work well in an origins-of-life setting. These polymerases are vulnerable to mistakes during copying, making survival of RNA systems difficult. Such systems have been studied by computer simulations, showing that the strands need to be kept together for survival, either on surfaces or in primitive cells. Differences in the details of the models have made comparing the surfaces to cells difficult. This work creates a unified model base allowing for comparison of these two environments. We find that the existence of primitive cells is very beneficial to systems of RNA polymers and thus it is likely that such cells existed at the origin of life.
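A stripped-down sketch of the group-selection mechanism described above, assuming protocells that hold polymerase and parasite strands, copy a random strand only if a polymerase is present, mutate polymerase copies into parasites with some probability, and split randomly when full; all parameters and rules are illustrative simplifications, not the thesis model:

```python
import random

random.seed(0)
MU = 0.1            # probability that a polymerase copy comes out as a parasite
SPLIT_SIZE = 20     # a cell divides once it holds this many strands
N_CELLS = 50        # fixed population: a daughter cell replaces a random cell

cells = [["P"] * 5 for _ in range(N_CELLS)]   # start with pure polymerase cells

for _ in range(20000):
    cell = random.choice(cells)
    if "P" in cell:                            # replication needs a polymerase present
        template = random.choice(cell)
        copy = "X" if template == "P" and random.random() < MU else template
        cell.append(copy)                      # parasites ("X") get copied for free
    if len(cell) >= SPLIT_SIZE:                # division with random partitioning
        random.shuffle(cell)
        daughter = cell[:SPLIT_SIZE // 2]
        del cell[:SPLIT_SIZE // 2]
        cells[random.randrange(N_CELLS)] = daughter   # daughter displaces a random cell

alive = sum("P" in c for c in cells)
total = sum(len(c) for c in cells)
parasites = sum(c.count("X") for c in cells)
print(f"cells still containing a polymerase: {alive}/{N_CELLS}")
print(f"overall parasite fraction: {parasites / total:.2f}")
```

Cells that inherit only parasites can no longer copy anything and are eventually displaced, which is the group-selection effect that lets protocell models tolerate higher mutation rates than surface models in the comparison described above.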
|