31 |
Light Scattering in Complex Mesoscale Systems: Modelling Optical Trapping and Micromachines / Vincent Loke, Unknown Date
Optical tweezers using highly focussed laser beams can be used to exert forces and torques and thus drive micromachines. This opens up a new field of microengineering whose potential has yet to be fully realised. Until now, methods used for modelling optical tweezers have been limited to scatterers that are homogeneous or have simple geometry. To aid in designing more general micromachines, I developed and implemented two main methods for modelling the micromachines we use; these methods can also be applied to further proposed structures to be fabricated. The first is an FDFD/T-matrix hybrid method that couples the finite difference frequency domain (FDFD) method, used for inhomogeneous and anisotropic media, with vector spherical wave functions (VSWFs) to formulate the T-matrix. The T-matrix is then used to calculate the torque on a trapped vaterite sphere, which is composed of birefringent unit crystals whose bulk structure appears to be arranged in a sheaf-of-wheat fashion. The second method formulates the T-matrix via the discrete dipole approximation (DDA) for complex, arbitrarily shaped mesoscale objects, implementing symmetry optimisations so that calculations otherwise impractical due to memory requirements and calculation time can be performed on high-end desktop PCs. This method was applied to modelling microrotors. The T-matrix represents the scattering properties of an object at a given wavelength; once it is calculated, subsequent calculations with different illumination conditions can be performed rapidly. This thesis also deals with studies of other light scattering phenomena, including the modelling of scattered fields from protein molecules, subsequently used to model Förster resonance energy transfer (FRET); determining the limits of trappability; interferometric measurement of Brownian motion; and a comparison between integral transforms computed by direct numerical integration and by overdetermined point-matching.
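The reuse property described above (compute the T-matrix once, then evaluate any illumination cheaply) comes from linearity: the scattered-field expansion coefficients are p = T a, where a are the incident-beam coefficients. A minimal numpy sketch of that reuse; the T-matrix here is a random placeholder of arbitrary size, not a physically derived one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # number of VSWF modes retained (toy size)
T = rng.standard_normal((n, n)) * 0.01   # placeholder T-matrix (illustrative only)

def scattered_coefficients(T, a):
    """Scattered-field expansion coefficients p from incident coefficients a: p = T a."""
    return T @ a

# Changing the illumination means changing only the incident coefficient vector `a`;
# the expensive step (building T) is never repeated.
a_beam1 = rng.standard_normal(n)
a_beam2 = rng.standard_normal(n)
p1 = scattered_coefficients(T, a_beam1)
p2 = scattered_coefficients(T, a_beam2)
```

Because the map is a single matrix-vector product, sweeping over beam positions or polarisations costs one multiplication each, which is why the hybrid methods invest their effort in building T.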
|
32 |
Judgements of style: People, pigeons, and Picasso / Stephanie C. Goodhew, Unknown Date
Judgements of and sensitivity to style are ubiquitous. People become sensitive to the structural regularities of complex or “polymorphous” categories through exposure to individual examples, which allows them to respond to new items that are of the same style as those previously experienced. This thesis investigates whether a dimension reduction mechanism could account for how people learn about the structure of complex categories; that is, whether through experience people extract the primary dimensions of variation in a category and use these to analyse and categorise subsequent instances. We used Singular Value Decomposition (SVD) as the method of dimension reduction, which yields the main dimensions of variation of pixel-based stimuli (eigenvectors). We then tested whether a simple autoassociative network could learn to distinguish paintings by Picasso and Braque that were reconstructed from only these primary dimensions of variation. The network could correctly classify the stimuli, and its performance was optimal with reconstructions based on just the first few eigenvectors. We then reconstructed the paintings using either just the first 10 eigenvectors (early reconstructions) or all 1,894 (full reconstructions), and asked human participants to categorise the images. We found that people could categorise the images with either the early or the full reconstructions. Therefore, people could learn to distinguish category membership based on the reduced set of dimensions obtained from SVD. This suggests that a dimension reduction mechanism analogous to SVD may be operating when people learn about the structure and regularities of complex categories.
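The reconstruction step described above (keep only the first k singular components of pixel-based stimuli, then rebuild the images) can be sketched with numpy. The stimulus matrix below is random stand-in data, not the Picasso/Braque set, and the autoassociative classifier itself is not reproduced here:

```python
import numpy as np

def svd_reconstruct(X, k):
    """Rebuild row-vector stimuli X (one image per row) from the first k
    singular components -- the 'primary dimensions of variation'."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 100))   # 20 hypothetical pixel-based stimuli
early = svd_reconstruct(X, 10)       # "early" reconstruction (first 10 components)
full = svd_reconstruct(X, 20)        # full-rank reconstruction recovers X
```

Keeping more components can only reduce the reconstruction error, which is why the interesting finding is that classification was already optimal with just the first few.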
|
33 |
Factors influencing the dispersal of Pseudomonas fluorescens NZI7 by Caenorhabditis elegans / Wilkins, Annekathrin, January 2016
Caenorhabditis elegans is a natural predator of the mushroom pathogen Pseudomonas fluorescens NZI7. The bacterial mechanisms for reducing predation by the nematode through the secretion of secondary metabolites have been described, but not yet fully explored. The behaviour of nematodes is influenced by the different factors produced by the pseudomonads. In this thesis we develop a range of assays linking the behaviour of C. elegans to these factors, to identify their role in bacteria-nematode interactions. We show that these factors play two distinct roles: they may either repel nematodes or harm them. This permits the classification of P. fluorescens NZI7 mutants lacking these factors as attractive, edible or both. Many studies of C. elegans behaviour have demonstrated that the nematode can distinguish between different food sources. Our results show two distinct types of response: chemotaxis drives the response to attractive or repellent stimuli, and nematodes also show a choice behaviour that is independent of chemotaxis. This choice behaviour is determined by bacterial edibility and requires the nematodes to come into contact with the bacteria; this contact is the foundation of bacterial dispersal by nematodes. By making use of the luminescence of the available bacterial mutants, we demonstrate an intimate link between the behaviour of C. elegans and the success with which bacteria are disseminated: if nematodes are induced to leave a bacterial colony regularly, whether through their genotype or the low edibility of the food, then they will spread bacteria effectively. Throughout this thesis, we use computational simulations based on a hybrid cellular automaton model to represent the nematode-bacteria interactions. These simulations recreate the observed behaviour of the system, helping to confirm our hypotheses and to establish the fundamental aspects of the interactions between the two species.
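The central claim — nematodes that leave a colony more often spread bacteria further — can be illustrated with a toy stochastic lattice model. This is only a conceptual sketch in the spirit of a cellular automaton, not the hybrid cellular automaton model of the thesis; the lattice size, step count and leaving probabilities are all invented for illustration:

```python
import random

def simulate_dispersal(leaving_prob, steps=200, size=21, seed=2):
    """Toy 1-D lattice sketch: a nematode starts on a central bacterial colony;
    each step it leaves its current patch (carrying bacteria by contact) with
    probability `leaving_prob`, contaminating the patch it lands on.
    Returns the number of patches that end up carrying bacteria."""
    rng = random.Random(seed)
    pos = size // 2
    contaminated = {pos}
    for _ in range(steps):
        if rng.random() < leaving_prob:            # low edibility -> leaves often
            pos = max(0, min(size - 1, pos + rng.choice((-1, 1))))
            contaminated.add(pos)                  # contact-based dispersal
    return len(contaminated)

# Less edible food (higher leaving probability) spreads bacteria further.
spread_edible = simulate_dispersal(0.05)
spread_inedible = simulate_dispersal(0.8)
```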
|
34 |
Application of magnetic resonance elastography to atherosclerosis / Thomas-Seale, Lauren Elizabeth Jane, January 2015
Atherosclerosis is the root cause of a wide range of cardiovascular diseases. Although it is a global arterial disease, some of its most severe consequences, heart attack and stroke, are caused by ischemia due to local plaque rupture. The risk of rupture is related to the mechanical properties of the plaque. Magnetic resonance elastography (MRE) images tissue elasticity by inverting externally excited harmonic wave displacement into a stiffness map, known as an elastogram. The aim of this thesis is to computationally and experimentally investigate the application of MRE to imaging the mechanical properties of atherosclerotic plaques. The cardiac cycle, the lumen boundary, and the size and inhomogeneous nature of atherosclerotic plaques pose additional complications compared to better-established MRE applications. Computational modelling allowed these complications to be assessed in a controlled and simplified environment prior to experimental studies. Computational simulation of MRE was proposed by combining steady-state shear waves, yielded by finite element analysis, with the 2D Helmholtz inversion algorithm. The accuracy and robustness of this technique were ascertained through models of homogeneous tissue. A computational sensitivity study was conducted on idealised atherosclerotic plaques, incorporating the effects of disease variables and of mechanical, imaging and inversion parameters on the wave images and elastograms. Subject to parameter optimisation, a change in local plaque shear modulus with composition was established. Amongst other variables, increasing the lipid pool volume in 10 mm³ increments was shown to decrease the predicted shear modulus for stenosis sizes between 50% and 80%. The limitations of the Helmholtz inversion algorithm were demonstrated. A series of arterial phantoms containing plaques of various sizes and stiffnesses was developed to test the experimental feasibility of the technique. The lumen was identifiable in the wave images and elastograms; however, the experimental wave propagation, noise and resolution left the vessel wall and plaque unresolvable. A computational replica of the phantoms yielded clearer wave images and elastograms, indicating that changes to the experimental procedure could lead to more successful results. The comparison also highlighted certain areas for improvement in the computational work. An imaging protocol for in vivo MRE through the peripheral arteries of healthy volunteers and peripheral artery disease patients was developed. The presence of physiological motion and low signal-to-noise ratios made the vessel anatomy unidentifiable. The application of MRE to atherosclerotic plaques through simulations, arterial phantoms, healthy volunteers and patients has shown that although there is potential to identify a change in shear modulus with composition, the addition of realistic experimental complications is severely limiting to the technique. The gradual addition of complications throughout the thesis has allowed their impact to be assessed and in turn has highlighted areas for future research.
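The 2D Helmholtz inversion mentioned above assumes local homogeneity, so the time-harmonic wave equation ρω²u + μ∇²u = 0 can be solved pointwise for the shear modulus: μ = -ρω²u/∇²u. A minimal sketch on a synthetic plane shear wave, where the recovered stiffness can be checked against the known value ρ(ω/k)²; the material constants are round illustrative numbers:

```python
import numpy as np

def helmholtz_inversion(u, dx, rho, omega):
    """Direct 2-D Helmholtz inversion: for time-harmonic displacement u,
    rho*omega**2*u + mu*laplacian(u) = 0, hence mu = -rho*omega**2*u / lap(u)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    return -rho * omega**2 * u / lap

# Synthetic plane shear wave with known stiffness mu = rho * (omega/k)**2.
rho, omega, k, dx = 1000.0, 2 * np.pi * 100, 200.0, 1e-3   # illustrative values
x = np.arange(64) * dx
u = np.cos(k * x)[None, :] * np.ones((64, 1))              # wave travelling along x
mu_map = helmholtz_inversion(u, dx, rho, omega)            # edges wrap, so trust interior
```

The same pointwise division is what makes the method fragile in practice: wherever noise corrupts the estimated Laplacian, as in the phantom experiments above, the elastogram degrades locally.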
|
35 |
The quantification of perception-based uncertainty using R-fuzzy sets and grey analysis / Khuman, Arjab Singh, January 2016
The nature of uncertainty cannot be generically defined, as it is domain and context specific. That being the case, several models have been proposed, each with its own benefits and shortcomings. From these, an R-fuzzy approach was judged to provide the most suitable foundation to enhance and expand upon. An R-fuzzy set is a relatively new model, itself an extension of fuzzy set theory. It makes use of the lower and upper approximation bounding of rough set theory, which allows the membership function of an R-fuzzy set to be a rough set. An R-fuzzy approach provides the means to encapsulate uncertain fuzzy membership values based on a given abstract concept. If the voting method is used, any fuzzy membership value contained within the lower approximation can be treated as an absolute truth, while a value contained only within the upper approximation may have been endorsed by a single voter or by a large majority, but not by all. This thesis introduces a significance measure, based on a variation of Bayes' theorem, which enables the quantification of any fuzzy membership value contained within an R-fuzzy set. Pairing the significance measure with an R-fuzzy set creates an intermediary bridge to a generalised type-2 fuzzy set: simply by inferencing from the returned degrees of significance, one can ascertain the true significance of any uncertain fuzzy membership value relative to the other encapsulated uncertain values. As an extension to this enhancement, the thesis also introduces the novel use of grey analysis. Utilising the absolute degree of grey incidence provides the means to measure and quantify the metric spaces between sequences generated from the returned degrees of significance for any given R-fuzzy set. As will be shown, this framework is ideally suited to domains where perceptions are being modelled, which may contain several varying clusters of cohorts based on any number of correlations. These clusters can then be compared and contrasted to allow for a more detailed understanding of the abstractions being modelled.
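The absolute degree of grey incidence used to compare significance sequences can be sketched as follows. This follows one common textbook formulation (zero-start each sequence, form signed areas, combine); the thesis's exact variant may differ, and the sequences below are invented:

```python
def absolute_grey_incidence(x, y):
    """Absolute degree of grey incidence between two equal-length sequences,
    per a common formulation: identical sequences give 1, diverging ones less."""
    def s(seq):
        z = [v - seq[0] for v in seq]            # zero-starting image
        return sum(z[1:-1]) + z[-1] / 2.0        # trapezoid-style signed area
    sx, sy = s(x), s(y)
    return (1 + abs(sx) + abs(sy)) / (1 + abs(sx) + abs(sy) + abs(sx - sy))

# Hypothetical degrees of significance for two cohorts over the same R-fuzzy set:
a = [0.2, 0.4, 0.7, 0.9]
eps_same = absolute_grey_incidence(a, a)                     # 1.0
eps_diff = absolute_grey_incidence(a, [0.2, 0.1, 0.05, 0.0]) # < 1
```

Closer incidence values indicate cohorts whose perceptions of the abstract concept move together, which is the comparison the framework enables.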
|
36 |
Molecular and computational analysis of temperature compensation of the Neurospora crassa circadian clock / Valentine, Matthew, January 2016
Circadian clocks are internal timekeepers that allow organisms to anticipate and exploit predictable daily changes in their environment, aiding survival. Clock-driven rhythms, such as asexual spore development (conidiation) in Neurospora crassa, show temperature-compensated periodicity that persists in constant conditions and can be reset by environmental time cues. This ability of circadian clocks to maintain a constant period and phase of behaviour over a range of temperatures is important, and whilst much of the machinery making up the circadian clock is known, the mechanism that underpins temperature compensation is not well understood. Further, it is unknown how the clock can control conidiation in the face of changing temperatures. To investigate possible mechanisms underlying temperature compensation, I first explored how compensation may arise within the central clock machinery using a comprehensive dynamic model of the Neurospora crassa circadian clock. This model incorporates the key components of the clock, and I introduced temperature-sensitive changes to components based on experimental observations. The analysis indicated that temperature-dependent changes in the binding of CK-1a to the FRQ-FRH complex may be pivotal in the temperature compensation mechanism. Previous work has highlighted the importance of the blue-light photoreceptor VIVID (VVD), as VVD knockout strains show a temperature-dependent delay in the phase of peak conidiation. I next explored this potential role using a theoretical output model. By incorporating regulation of the output pathway by VVD, I found that VVD may contribute to phase control by increasing expression of genes or proteins that peak early in the output pathway. RNA-Seq experiments were carried out to assess the contribution of VVD to the overall transcriptomic profile of Neurospora. The analysis highlighted several key genes through which VVD may regulate the conidiation pathway, including the clock-controlled genes eas and ccg-9, which both show temperature- and strain-dependent changes in expression patterns over the time course of conidiation. In conclusion, VVD may indeed have an important role in the temperature compensation of output pathways, though further work is needed to assess the specific contributions of the genes highlighted by my RNA-Seq analysis to the compensatory mechanism.
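The core idea of temperature compensation — opposing temperature-sensitive rates cancelling so the period stays constant — can be shown with a deliberately simple arithmetic sketch. This is a conceptual illustration only, not the thesis's dynamic clock model: it assumes the period scales as a ratio of two rates, each with standard Q10 temperature scaling, with `k_bind` loosely standing in for CK-1a/FRQ-FRH binding and `k_deg` for FRQ turnover:

```python
def q10_rate(k25, q10, temp_c):
    """Standard Q10 temperature scaling of a reaction rate."""
    return k25 * q10 ** ((temp_c - 25.0) / 10.0)

def toy_period(temp_c, q10_bind, q10_deg):
    """Toy clock period ~ k_bind / k_deg (hours).
    Matching Q10s cancel exactly, giving a temperature-compensated period."""
    k_bind = q10_rate(1.0, q10_bind, temp_c)
    k_deg = q10_rate(1.0 / 22.0, q10_deg, temp_c)   # ~22 h period at 25 degrees C
    return k_bind / k_deg

compensated = [toy_period(T, 2.0, 2.0) for T in (20, 25, 30)]     # flat period
uncompensated = [toy_period(T, 2.0, 1.2) for T in (20, 25, 30)]   # period drifts
```

In this caricature, compensation is a balance condition between temperature sensitivities, which is why a temperature-dependent change in a single binding step can make or break it.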
|
37 |
Computational 3D modelling of anatomical structures and simulation of their radiographic images / Clayton Eduardo dos Santos, 01 September 2008
Conventional quality control methods applied to diagnostic radiology are the best way to assure the quality of the images produced. However, investigating particular aspects of the radiological image formation process requires complementary computational tools, owing to the number of variables involved. Voxel-based computational phantoms, moreover, cannot represent the morphometric variations needed to simulate examinations whose diagnosis is image-based. In this work a new type of computational phantom was developed, based on 3D modelling, which retains the advantages of traditional computational phantoms without some of their limitations. The modelling tool employed, Blender, is freely available on the internet. The technique used was box modelling, which consists of deforming a basic primitive, in this case a cube, until it takes the shape of the structure to be modelled. Images from anatomy atlases and photographs of a skeleton provided by the University of Mogi das Cruzes were used as references. The skeletal system, the internal organs and the external anatomy of the human body were modelled. The methodology allowed model parameters to be altered within the modelling tool itself; this was demonstrated by varying the shape of the intestine and increasing the amount of adipose tissue in the mesh representing the skin. Radiological images were simulated using the mass attenuation coefficients of materials, bones and tissues, applied to models with diverse physical characteristics. This versatility makes it possible to predict the influence that morphometric differences between individuals have on the images, providing a relevant tool that complements conventional quality control methods.
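Simulating a radiographic image from mass attenuation coefficients reduces, per ray, to Beer-Lambert attenuation through the stacked tissues: I = I₀·exp(-Σ (μ/ρ)ᵢ ρᵢ tᵢ). A minimal sketch; the coefficient, density and thickness values are round illustrative numbers, not tabulated reference data:

```python
import math

def transmitted_intensity(i0, layers):
    """Beer-Lambert attenuation through stacked tissue layers.
    Each layer: (mass attenuation coefficient mu/rho [cm^2/g],
    density rho [g/cm^3], thickness t [cm])."""
    exponent = sum(mu_rho * rho * t for mu_rho, rho, t in layers)
    return i0 * math.exp(-exponent)

# One ray through soft tissue only vs. soft tissue plus bone:
soft = (0.2, 1.0, 10.0)   # illustrative soft-tissue layer
bone = (0.6, 1.9, 1.0)    # illustrative cortical-bone layer
i_soft = transmitted_intensity(1.0, [soft])
i_bone = transmitted_intensity(1.0, [soft, bone])   # darker: bone attenuates more
```

Repeating this per ray across a 3D model yields the simulated radiograph, and swapping coefficients or layer thicknesses reproduces the morphometric variation the phantom was designed to explore.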
|
38 |
Process algebra with layers: a language for multi-scale integration modelling / Scott, Erin G., January 2016
Multi-scale modelling and analysis is becoming increasingly important and relevant. Analysing the emergent properties that arise from interactions between the scales of a multi-scale system is important in finding solutions. There is no universally adopted theoretical or computational framework or language for the construction of multi-scale models: most modelling approaches are specific to the problem they address and use a hybrid combination of modelling languages to model specific scales. This thesis addresses whether process algebra can offer a unique opportunity in the definition and analysis of multi-scale models. In this thesis the generic Process Algebra with Layers (PAL) is defined: a language for multi-scale integration modelling. This work highlights the potential of process algebra to model multi-scale systems. PAL was designed around features and challenges found when modelling a multi-scale system in an existing process algebra. The unique features of PAL are its layers: Population and Organism. The language modularises the spatial scales of the system into layers, and therefore modularises the detail of each scale. An Organism can represent a molecule, organelle, cell, tissue, organ or whole organism, and is described by internal species; an internal species, depending on the scale of the Organism, can likewise represent a molecule, organelle, cell, tissue, organ or organism. Populations hold specific types of Organism, for example life stages, cell phases or infectious states. The Population and Organism layers are integrated through mirrored actions. This novel language allows the clear definition of scales, and of the interactions within and between them, in one model. PAL can be applied to define a variety of multi-scale systems, and has been applied to two unrelated case studies to highlight the advantages of the generic novel language: first, the effects of ocean acidification on the life stages of the Pacific oyster; second, the effects of DNA damage from cancer treatment on the length of the cell cycle and on cell population growth.
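The layered structure described above can be mirrored loosely in ordinary code. This Python sketch is only a structural analogy to PAL (whose formal syntax and semantics are defined in the thesis), using the DNA-damage case study as a hypothetical example: a Population-level action is mirrored down to the internal species of each Organism:

```python
class Organism:
    """Organism layer: described by internal species one scale below
    (here, hypothetical DNA species inside a cell)."""
    def __init__(self, name, species):
        self.name = name
        self.species = dict(species)   # internal species -> count

    def on_action(self, action):
        # Mirrored action: a Population-level event changes internal state.
        if action == "damage":
            self.species["intact_dna"] -= 1
            self.species["damaged_dna"] = self.species.get("damaged_dna", 0) + 1

class Population:
    """Population layer: holds Organisms of a given type (life stage,
    cell phase, infectious state, ...)."""
    def __init__(self, organisms):
        self.organisms = list(organisms)

    def broadcast(self, action):
        # Integration between layers via mirrored actions.
        for org in self.organisms:
            org.on_action(action)

cells = Population([Organism(f"cell{i}", {"intact_dna": 2}) for i in range(3)])
cells.broadcast("damage")   # treatment event propagates to every cell's species
```

The point of the analogy is the modularisation: each layer only knows its own detail, and the scales communicate solely through the paired (mirrored) actions.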
|
39 |
Systems biology informatics for the development and use of genome-scale metabolic models / Swainston, Neil, January 2012
Systems biology attempts to understand biological systems through the generation of predictive models that allow the behaviour of the system to be simulated in silico. Metabolic systems biology has in recent years focused upon the reconstruction and constraint-based analysis of genome-scale metabolic networks, which provide computational and mathematical representations of the known metabolic capabilities of a given organism. This thesis initially concerns itself with the development of such metabolic networks, first considering the community-driven development of consensus networks of the metabolic functions of Saccharomyces cerevisiae. This is followed by a consideration of automated approaches to network reconstruction that can be applied to facilitate what has, until recently, been an arduous manual process. The use of such large-scale networks in the generation of dynamic kinetic models is then considered. The development of such models is dependent upon the availability of experimentally determined parameters, from omics approaches such as transcriptomics, proteomics and metabolomics, and from kinetic assays. A discussion of the challenges faced with developing informatics infrastructure to support the acquisition, analysis and dissemination of quantitative proteomics and enzyme kinetics data follows, along with the introduction of novel software approaches to address these issues. The requirement for integrating experimental data with kinetic models is considered, along with approaches to construct, parameterise and simulate kinetic models from the network reconstructions and experimental data discussed previously. Finally, future requirements for metabolic systems biology informatics are considered, in the context of experimental data management, modelling infrastructure, and data integration required to bridge the gap between experimental and modelling approaches.
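Constraint-based analysis of a genome-scale network typically means flux balance analysis: impose steady state S·v = 0 and flux bounds, then maximise an objective flux by linear programming. A minimal sketch on an invented three-reaction toy network (uptake → A → B → biomass), not a real reconstruction:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix: rows are metabolites, columns are reactions.
S = np.array([[1, -1, 0],    # A: produced by uptake, consumed by conversion
              [0, 1, -1]])   # B: produced by conversion, consumed by biomass
c = [0, 0, -1]               # linprog minimises, so negate the biomass flux
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 (arbitrary units)

# Steady state S v = 0 plus bounds; maximise biomass production.
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
v_uptake, v_conv, v_biomass = res.x        # all 10: chain runs at the uptake cap
```

At genome scale the same linear programme simply has thousands of reactions and metabolites, which is why the quality of the reconstruction (the S matrix and bounds) dominates the quality of the predictions.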
|
40 |
Deriving the internal bony structure of the cochlea from high-resolution µCT images for translation to low-resolution image-based construction of person-specific computational models of cochlear implants / Human-Baron, Rene, January 2019
To investigate cochlear implant (CI) performance, geometric computational models of the cochlea have been used to assess and optimise electrode insertion strategies and to investigate current flow through the cochlear volume as a result of intra-cochlear stimulation. Most of these models are derived from low-resolution computed tomography (CT) and radiographic scans of humans, or from high-resolution histological sections of cochleae that are not viable for in vivo studies. Often these models lack significant detail, still use a generic shape for the inner or obscured structures of the cochlea, and are not clinically translatable. A method for predicting obscured landmarks from reference landmarks is needed to generate person-specific computational models of the cochlea when the data source is of low quality. A standard set of polynomial prediction functions derived from high-resolution μCT scans needs to be developed and applied to clinically available CT images of the cochlea. Although histological sections of the human cochlea provide the best resolution of the cochlear structures, sequential midmodiolar sectioning of the cochlea is not possible; μCT scans provide a solution, as the images are still of high quality and allow detailed measurement of cochlear parameters on midmodiolar sections. Secondly, the more recent construction of a knowledge-based automated landmark computational model needs to be refined. The search fields that the automated model's template uses to place a landmark need to be standardised and should have the ability to morph the cochlear shape together with the inner bony structures. Such models are of great clinical importance, as they can be generated much more quickly to inform CI surgeons on the individual cochlear anatomy of a CI patient and on the maintenance of the CI.
Lastly, the effect that taxonomic class has on the functional implications of an implanted electrode array has yet to be determined. The cochlear geometry that best predicts the location of the electrode array is important, as it has a significant implication for hearing outcomes.
This thesis assesses the anatomical geometric factors that affect inter-person variation at the peripheral-electrode interface by developing a pre-operative approach to person-specific model design for implant candidates. This approach aims to increase the accuracy and detail of the geometric parameters available for model construction and to integrate the image data into three-dimensional (3D) computational volume conduction models. The study used a landmark-based approach to measure the cochlear parameters that contribute to cochlear variation, together with the development of algorithms to derive obscured landmarks from consistently available cochlear landmarks. A workflow in the form of a custom script, UPCochlea.m, describing the technical aspects of landmark analysis, was created to describe each cochlea algorithmically and to extract the spiral trajectories that describe cochlear anatomy. Polynomial algorithms describing each spiral were created to serve as a standard for determining each cochlear class and for predicting obscured spirals from clinically available data. This is the first study of its kind to describe all eight spirals that constitute the cochlea and spiral lamina.
Automatic generation of person-specific landmark-based 3D computational models is a rapid process that can easily be translated into a clinical tool to inform surgeons, CI manufacturers and bio-engineers on the maintenance of such models. By refining the search fields within which the template of a landmark-based automated cochlear computational model searches for each landmark to be placed, more accurate automated computational models could be generated.
Psychometric data from CI users were correlated with anatomical dimensions, taxonomic classification and electrode locations derived from post-operative patient scans to determine the factors, if any, that affect electrode array location and thus the functional outcomes of CI users. The factors that contribute to speech and hearing outcomes may be used to optimise the parameter settings for CI user device programming. / Thesis (PhD (Biosystems))--University of Pretoria, 2019. Electrical, Electronic and Computer Engineering. Restricted.
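The prediction step — deriving an obscured spiral from a consistently visible reference spiral via a standard polynomial — can be sketched with numpy. The spiral shapes and the degree-3 choice below are invented for illustration; the thesis's standards are fitted from μCT data:

```python
import numpy as np

# Fit a polynomial mapping a visible reference spiral (e.g. the lateral wall)
# to a hypothetical obscured spiral (e.g. the spiral lamina), then apply it
# on data where only the reference landmarks are measurable.
theta = np.linspace(0, 2.5 * 2 * np.pi, 200)           # ~2.5 cochlear turns
r_reference = 4.0 * np.exp(-0.08 * theta)              # visible spiral radius (mm)
r_obscured = 2.6 * np.exp(-0.10 * theta) + 0.1         # obscured spiral radius (mm)

coeffs = np.polyfit(r_reference, r_obscured, deg=3)    # "standard" prediction polynomial
r_predicted = np.polyval(coeffs, r_reference)          # apply to clinical-style data
max_err = float(np.max(np.abs(r_predicted - r_obscured)))
```

In the thesis workflow a polynomial of this kind, derived once from high-resolution scans, is what lets a low-resolution clinical CT still yield the internal bony geometry needed for a person-specific model.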
|