81 |
Impedance Spectroscopy on Cell Layers: Bayesian Data Analysis and Model Comparisons of Experimental Measurements for the Characterization of the Endothelial Barrier Function
Zimmermann, Franziska 09 July 2024 (has links)
The endothelium constitutes the decisive barrier between the intravascular and extravascular space and is therefore essential for physiological and pathophysiological processes such as the selectivity of the blood-brain barrier or the formation of edema. The key quantity describing the leakiness of this barrier is the permeability. One method for quantifying the endothelial barrier function is impedance spectroscopy. Here, an alternating voltage is applied to an isolated cell monolayer cultured on a filter, and the resulting complex resistance, the impedance Z(f), is determined for different measurement frequencies f. The characteristic quantities are the frequency-dependent magnitude |Z| and phase Ph of the complex impedance. The transendothelial electrical resistance TER is often used to characterize such a cell layer. This value is determined by setting up physical equivalent circuits as a mathematical model that describes the measurement setup as accurately as possible and by subsequently optimizing the model parameters to obtain the best possible fit between model values and measured data. Commercially available impedance analyzers make the determination of |Z| and Ph experimentally straightforward and often provide a direct analysis of these measurements using preset mathematical models. These models, however, can neither be extended nor exchanged. The resulting parameters are sometimes reported without uncertainties, which then have to be estimated from repeated measurements. Moreover, no quantitative model comparison is possible. The aim of this work was therefore to retain the practical experimental advantages of a commercial instrument while extending and improving the analysis of the measured data in the points mentioned above. To this end, two different algorithms for the evaluation of impedance-spectroscopic data were implemented in Python, based on Bayesian data analysis. Bayesian data analysis is a logically consistent way of determining the parameters of a given model assumption while taking the given data and their uncertainties into account. The result is the posterior, i.e. probability distributions of the parameters, from which means and standard deviations can be computed. Prior knowledge about the model parameters can be incorporated into the analysis via priors. In contrast to the commercial instruments, the analysis can take both |Z| and Ph into account. In addition, the computation of the evidence enables a quantitative model comparison, so that valid statements can be made as to which of the investigated models best describes the measured data. The experimental measurements were performed with the CellZscope (nanoAnalytics, Münster), which allows measurements of filter-grown cell monolayers in so-called wells over a frequency range of 1 to 100 kHz, also after the addition of stimulating agents. The two implemented algorithms are a Markov chain Monte Carlo (MCMC) method and MultiNest (MN) nested sampling. The MCMC method was implemented entirely from scratch; the MN algorithm builds on a freely available library and was adapted accordingly. Both algorithms were first validated extensively with self-generated test data.
Both delivered nearly identical parameter estimates in excellent agreement with the chosen simulation parameters. Including the phase improved the estimates further and additionally reduced the parameter uncertainties. As a further improvement, a procedure was established for estimating the measurement uncertainties and their correlations from the measured data themselves, since the CellZscope does not report them. For the MN algorithm, the Jeffreys prior could be implemented, which also enabled the correct analysis of mathematically complex models with many parameters.
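The equivalent-circuit models referred to above can be written down compactly. The following is a minimal Python sketch (illustrative only, not the thesis code, with hypothetical parameter values) of a forward model consisting of a constant phase element in series with several parallel RC elements, returning the measured quantities |Z| and Ph:

```python
import numpy as np

def z_model(f, A_cpe, alpha, R, C):
    """Impedance of a constant phase element (CPE) in series with parallel RC elements.

    f     : array of measurement frequencies in Hz
    A_cpe : CPE magnitude parameter
    alpha : CPE exponent (0 < alpha <= 1; alpha = 1 is an ideal capacitor)
    R, C  : sequences of resistances and capacitances, one entry per RC element
    """
    omega = 2 * np.pi * np.asarray(f, dtype=float)
    z_cpe = 1.0 / (A_cpe * (1j * omega) ** alpha)                      # electrode-electrolyte interface
    z_rc = sum(r / (1.0 + 1j * omega * r * c) for r, c in zip(R, C))   # series of parallel RC elements
    return z_cpe + z_rc

# Hypothetical parameter values for illustration only
f = np.logspace(0, 5, 50)                                  # 1 Hz ... 100 kHz
Z = z_model(f, A_cpe=1e-5, alpha=0.85,
            R=[100.0, 30.0, 15.0], C=[1e-6, 5e-9, 1e-4])
magnitude, phase = np.abs(Z), np.angle(Z, deg=True)        # |Z| and Ph as reported by the instrument
```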
After this thorough validation of the algorithms with respect to parameter estimation, determination of the measurement uncertainties and model comparison, they were applied to experimental measurements. In these, human umbilical vein endothelial cells (HUVEC) were stimulated with thrombin at various concentrations and with Triton X-100. In endothelial cells, thrombin leads, among other effects, to cell retraction and the formation of intercellular gaps with an increase in macromolecule permeability. Triton X-100, a non-ionic detergent, destroys the lipid layer of the cell membranes. For comparison with the measurements on wells containing medium and a cell-covered filter, measurements were also performed on wells containing only medium and on wells containing medium and a cell-free filter. Based on an extensive literature review, various models were investigated whose common feature is a constant phase element (CPE) for the electrode-electrolyte interface and a series connection of several RC elements (a resistor R and a capacitor C connected in parallel). For all three conditions, the most suitable model turned out to be a CPE in series with several RC elements, without any further elements. The number nRC of RC elements was determined as nRC = 4 for the wells with medium only and for those with a cell-covered filter, and as nRC = 3 for the wells with a cell-free filter. This showed that describing the medium with a single ohmic resistor, as is often done in the literature, is insufficient. Strikingly, under all conditions one RC element exhibited a very low capacitance on the order of a few nF. For the filter, an RC element with a characteristic capacitance resulted, yet overall fewer RC elements were obtained than for the measurements with medium only. For the measurements with a cell-covered filter, the concentration-dependent increase of the TER known from the literature could be demonstrated. The model comparison suggested that the integrity of the cell monolayer is not completely destroyed at high thrombin concentrations, since a model that also contains a cell-specific RC element still had the highest model probability. For stimulation with Triton X-100, by contrast, a model describing a filter without a cell layer received a higher probability; here the monolayer thus no longer appears to be intact. To examine the biophysical meaning of individual model parameters more closely, first cell-free measurements were additionally performed on wells with deionized water and sodium chloride solutions of different concentrations and compared with calculations from the literature. Here, the very low capacitance found for all wells could be attributed to the construction of the well itself. For the other RC elements of the medium and the filter, there were indications that they arise from the formation of thin ionic layers. In summary, the implemented Bayesian data-analysis algorithms proved to be excellently suited for the evaluation of impedance-spectroscopic measurements. The extensive validation with test data provides a high degree of confidence in the reliability of the results. Parameter estimation for measurements on cell monolayers succeeded even for very complex models, and the evidence allows a quantitative model comparison. The model parameters were also examined with respect to their biophysical meaning.
The technique is therefore preferable to the direct analysis of the CellZscope and can also be used for improved planning of future experiments (experimental design). Furthermore, new models can be applied, for example to quantify the area of the monolayer that is still intact under strong thrombin stimulation or to describe more complex setups such as co-cultures of different cell types.
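As a rough illustration of the Bayesian estimation step, the following sketch implements a plain random-walk Metropolis sampler with a Gaussian likelihood over |Z| and Ph. It assumes the z_model function from the sketch above and treats the uncertainties sigma_mag and sigma_ph as known; it is not the thesis implementation, which additionally covers MultiNest sampling, Jeffreys priors, estimation of correlated measurement uncertainties and evidence-based model comparison.

```python
import numpy as np

def log_likelihood(theta, f, mag_obs, ph_obs, sigma_mag, sigma_ph):
    # theta = (A_cpe, alpha, R1, C1, R2, C2, ...): unpack into the forward model
    A_cpe, alpha = theta[0], theta[1]
    R, C = theta[2::2], theta[3::2]
    Z = z_model(f, A_cpe, alpha, R, C)
    res_mag = (np.abs(Z) - mag_obs) / sigma_mag
    res_ph = (np.angle(Z, deg=True) - ph_obs) / sigma_ph
    return -0.5 * np.sum(res_mag**2 + res_ph**2)

def metropolis(theta0, step, n_steps, logpost, seed=0):
    """Random-walk Metropolis sampler; returns the chain of parameter samples."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = logpost(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = logpost(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# With a flat prior inside physical bounds, the log-posterior equals the log-likelihood:
# chain = metropolis(theta0, step, 50_000,
#                    lambda t: log_likelihood(t, f, mag_obs, ph_obs, sigma_mag, sigma_ph))
# Posterior means and standard deviations then follow from chain.mean(0) and chain.std(0).
```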
|
82 |
A management model for the recognition of prior learning (RPL) at the University of South Africa
Janakk, Lisa 11 1900 (has links)
This study explored the implementation of the recognition of prior learning (RPL) at Unisa by investigating the strengths and weaknesses of the RPL methodology, instruments and processes when taking students through the RPL process. The successes and challenges experienced by the RPL academic advisors and the academic assessors were determined and guidelines provided for the effective implementation of RPL at Unisa. The empirical research design was exploratory within a qualitative framework employing participant observation, focus group interviewing, individual interviewing and the distribution of questionnaires that consisted of open-ended questions. The research sample comprised 26 purposefully selected participants. With regard to the research findings, the challenges include a lack of administrative support, a lack of support from top management and the academic staff, and a lack of communication between management and the RPL department. The strength of the RPL department lay in its well-documented process manual. / Teacher Education / M. Ed. (Education Management)
|
83 |
Adaptive Shrinkage in Bayesian Vector Autoregressive Models
Feldkircher, Martin; Huber, Florian 03 1900 (has links) (PDF)
Vector autoregressive (VAR) models are frequently used for forecasting and impulse response analysis. For both applications, shrinkage priors can help improve inference. In this paper we derive the shrinkage prior of Griffin et al. (2010) for the VAR case and its relevant conditional posterior distributions. This framework imposes a set of normally distributed priors on the autoregressive coefficients and the covariances of the VAR, along with Gamma priors on a set of local and global prior scaling parameters. This prior setup is then generalized by introducing another layer of shrinkage with scaling parameters that push certain regions of the parameter space to zero. A simulation exercise shows that the proposed framework yields more precise estimates of the model parameters and impulse response functions. In addition, a forecasting exercise applied to US data shows that the proposed prior outperforms other specifications in terms of point and density predictions. (authors' abstract) / Series: Department of Economics Working Paper Series
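As a rough illustration of this kind of hierarchy (a sketch under assumed hyperparameters and a generic parameterization, not necessarily the authors' exact specification), drawing each coefficient's variance from a Gamma distribution concentrates prior mass near zero while keeping heavy tails:

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 100_000

# Hypothetical shrinkage hyperparameters; lam < 1 pushes more prior mass toward zero
lam, tau = 0.3, 1.0

# Hierarchy: psi_j ~ Gamma(lam, scale=tau), beta_j | psi_j ~ N(0, psi_j)
psi = rng.gamma(shape=lam, scale=tau, size=n_draws)           # local prior scaling parameters
beta_ng = rng.normal(0.0, np.sqrt(psi))                       # normal-gamma shrinkage draws
beta_normal = rng.normal(0.0, np.sqrt(lam * tau), n_draws)    # plain normal prior, same variance

# The shrinkage prior puts far more mass in a small neighbourhood of zero ...
print(np.mean(np.abs(beta_ng) < 0.05), np.mean(np.abs(beta_normal) < 0.05))
# ... yet has heavier tails, so genuinely large coefficients are shrunk less aggressively
print(np.quantile(np.abs(beta_ng), 0.99), np.quantile(np.abs(beta_normal), 0.99))
```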
|
84 |
The Priory of Durham in the time of John Wessington, Prior 1416-1446
Dobson, Richard Barrie January 1963 (has links)
No description available.
|
85 |
Grade 12 marks as a predictor of success in mathematics at a university of technology / I.D. Mulder
Mulder, Isabella Dorothea January 2011 (has links)
Problems with students' performance in Mathematics at tertiary level are common in South Africa, as they are worldwide. Pass rates at the university of technology where the researcher is a lecturer are only about 50%. At many universities it has become common practice to refer students who do not have a reasonable chance of succeeding at university level for additional support to try to rectify this situation. However, the question is which students need such support. Because Grade 12 marks are often not perceived as dependable, it has become common practice at universities to re-test students by way of an entrance exam or the "National Benchmark Test" project. The question arises whether such re-testing is necessary, since it costs time and money, and practical issues make it difficult to complete timeously. Many factors have an influence on performance in Mathematics. School-level factors include articulation of the curriculum at different levels, insufficiently qualified teachers, not enough teaching time and language problems. However, these factors also affect performance in most other subjects, yet it is Mathematics and other subjects based on Mathematics that are generally more problematic. Therefore this study focused on the unique properties of the subject Mathematics. The determining role of prior knowledge, the step-by-step development of mathematical thinking, and conative factors such as motivation and perseverance were explored. Based on the belief that these factors would already have been reflected sufficiently in the Grade 12 marks, the correlation between the marks for Mathematics in Grade 12 and the Mathematics marks at tertiary level was investigated to assess whether it was strong enough for the Grade 12 Mathematics marks to be used as a reliable predictor of success or failure at university level. It was found that the correlation between the Grade 12 Mathematics marks and especially those for Mathematics I was strong (r = 0.61). The Grade 12 Mathematics marks and those for Mathematics II produced a correlation coefficient of rs = 0.52. It also became apparent that failure in particular could be predicted fairly accurately on the basis of the Grade 12 Mathematics marks. No student with a Grade 12 Mathematics mark below 60% succeeded in completing Mathematics I and II in the prescribed two semesters, and only about 11% completed them successfully after one repetition. The conclusion was that the reliability of the prediction based on the Grade 12 Mathematics marks was sufficient to refer students with a mark of less than 60% for some form of additional support. / MEd, Learning and Teaching, North-West University, Vaal Triangle Campus, 2011
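As a small illustration of the correlation analysis described above (with synthetic marks, so the output will not reproduce the reported r = 0.61 and rs = 0.52), Pearson and Spearman coefficients of this kind can be computed as follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic example data: Grade 12 Mathematics marks and first-year Mathematics I marks (%)
grade12 = rng.uniform(40, 95, size=120)
maths1 = np.clip(0.9 * grade12 - 15 + rng.normal(0, 12, size=120), 0, 100)

r, p_r = stats.pearsonr(grade12, maths1)        # Pearson (linear) correlation
rho, p_rho = stats.spearmanr(grade12, maths1)   # Spearman (rank-based) correlation
print(f"Pearson r = {r:.2f} (p = {p_r:.3g}), Spearman rs = {rho:.2f} (p = {p_rho:.3g})")

# A simple screen analogous to the 60% referral threshold discussed above
passed = maths1 >= 50
print("Pass rate for students with a Grade 12 mark below 60%:", passed[grade12 < 60].mean())
```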
|
86 |
Audiovisual Prior Entry: Evidence from the Synchrony Comparison Judgment Task
Capstick, Gary 26 July 2012 (has links)
Prior entry refers to the notion that attended stimuli are perceived sooner than unattended stimuli due to a speed-up in sensory processing. The century-long debate regarding the existence of the prior entry phenomenon has always been grounded in the degree to which the methods applied to the problem allow for cognitive response bias. This thesis continues that trend by applying the synchrony comparison judgment method to the problem of audiovisual prior entry. Experiment 1 put this method into context with two other common psychophysical methods, the temporal order judgment and the synchrony judgment, that have been applied to the prior entry problem. The results of this experiment indicated that the temporal order judgment method was out of step with the other two methods in terms of the parameter estimates typically used to evaluate prior entry. Experiment 2 evaluated and confirmed that a specific response bias helps explain the difference in parameter estimates between the temporal order judgment method and the other two. Experiment 3 evaluated the precision of the synchrony comparison judgment method. The results indicated that the method was precise enough to detect potentially small prior entry effect sizes, and that it afforded the ability to detect those participants whose points of subjective synchrony deviate substantially from zero. Finally, Experiment 4 applied the synchrony comparison judgment method to a prior entry scenario. No prior entry effect was observed. Overall, this thesis highlights the drawbacks of all previous methods used to evaluate audiovisual perception, including prior entry, and validates the use of the synchrony comparison judgment. Further, because this method resists response bias, this result now stands as the most convincing evidence yet against the prior entry phenomenon.
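For readers unfamiliar with how points of subjective synchrony (PSS) are estimated in such paradigms, the following is a generic sketch with synthetic temporal order judgment data and a cumulative-Gaussian fit; it is not the procedure or data used in this thesis:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stimulus onset asynchronies in ms (assumed convention: negative = auditory stimulus first)
soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], dtype=float)

# Synthetic proportion of "visual first" responses at each SOA
p_visual_first = np.array([0.03, 0.10, 0.25, 0.40, 0.55, 0.70, 0.82, 0.93, 0.98])

def psychometric(x, pss, sigma):
    """Cumulative Gaussian: PSS is the 50% point, sigma indexes temporal precision."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_visual_first, p0=[0.0, 80.0])
print(f"PSS = {pss:.1f} ms, sigma = {sigma:.1f} ms")
# A PSS that shifts reliably with attention toward one modality would be taken as
# evidence for prior entry; a PSS that stays near zero would not.
```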
|
87 |
Bayesian Structural Phylogenetics
Challis, Christopher January 2013 (has links)
This thesis concerns the use of protein structure to improve phylogenetic inference. There has been growing interest in phylogenetics as the number of available DNA and protein sequences continues to grow rapidly and demand from other scientific fields increases. It is now well understood that phylogenies should be inferred jointly with alignment through use of stochastic evolutionary models. It has not been possible, however, to incorporate protein structure in this framework. Protein structure is more strongly conserved than sequence over long distances, so an important source of information, particularly for alignment, has been left out of analyses.

I present a stochastic process model for the joint evolution of protein primary and tertiary structure, suitable for use in alignment and estimation of phylogeny. Indels arise from a classic links model and mutations follow a standard substitution matrix, while backbone atoms diffuse in three-dimensional space according to an Ornstein-Uhlenbeck process. The model allows for simultaneous estimation of evolutionary distances, indel rates, structural drift rates, and alignments, while fully accounting for uncertainty. The inclusion of structural information enables pairwise evolutionary distance estimation on time scales not previously attainable with sequence evolution models. Ideally inference should not be performed in a pairwise fashion between proteins, but in a fully Bayesian setting simultaneously estimating the phylogenetic tree, alignment, and model parameters. I extend the initial pairwise model to this framework and explore model variants which improve agreement between sequence and structure information. The model also allows for estimation of heterogeneous rates of structural evolution throughout the tree, identifying groups of proteins structurally evolving at different speeds. In order to explore the posterior over topologies by Markov chain Monte Carlo sampling, I also introduce novel topology + alignment proposals which greatly improve mixing of the underlying Markov chain. I show that the inclusion of structural information reduces both alignment and topology uncertainty. The software is available as a plugin to the package StatAlign.

Finally, I also examine limits on statistical inference of phylogeny through sequence information models. These limits arise due to the "cutoff phenomenon," a term from probability which describes processes that remain far from their equilibrium distribution for some period of time before swiftly transitioning to stationarity. Evolutionary sequence models all exhibit a cutoff; I show how to find the cutoff for specific models and sequences and relate the cutoff explicitly to increased uncertainty in inference of evolutionary distances. I give theoretical results for symmetric models, and demonstrate with simulations that these results apply to more realistic and widespread models as well. This analysis also highlights several drawbacks of common default priors for phylogenetic analysis, and I suggest a more useful class of priors. / Dissertation
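To give a concrete sense of the structural component of such a model, here is a minimal sketch (hypothetical parameters and toy coordinates, not the StatAlign plugin code) of backbone coordinates diffusing under an Ornstein-Uhlenbeck process, using its exact discretization:

```python
import numpy as np

def simulate_ou(x0, theta, sigma, dt, n_steps, seed=0):
    """Exact-discretization simulation of an Ornstein-Uhlenbeck process with mean zero.

    x0    : (n_atoms, 3) initial backbone coordinates (e.g. C-alpha positions)
    theta : mean-reversion rate (structural drift rate)
    sigma : diffusion coefficient
    dt    : evolutionary time between sampled states
    """
    rng = np.random.default_rng(seed)
    decay = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta))
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(n_steps):
        x = decay * x + sd * rng.standard_normal(x.shape)   # conditional mean + Gaussian noise
        trajectory.append(x.copy())
    return np.stack(trajectory)

# Toy protein of 5 residues drifting along one branch for 10 time steps
coords0 = np.random.default_rng(1).normal(0.0, 5.0, size=(5, 3))
traj = simulate_ou(coords0, theta=0.05, sigma=1.0, dt=1.0, n_steps=10)
print(traj.shape)   # (11, 5, 3): sampled structures along the branch
```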
|
88 |
Integrating multiple individual differences in web-based instruction
Alhajri, Rana Ali January 2014 (has links)
There has been an increasing focus on web-based instruction (WBI) systems which accommodate individual differences in educational environments. Many of those studies have focused on the investigation of learners' behaviour to understand their preferences, performance and perception using hypermedia systems. Existing studies focus extensively on performance-measurement attributes such as the time a user spends in the system, the score gained and the number of pages visited in the system. However, there is a dearth of studies which explore the relationship between such attributes in measuring performance level. Statistical analysis and data mining techniques were used in this study. We built a WBI program based on existing designs which accommodated learners' preferences. We evaluated the proposed system by comparing its results with related studies. Then, we investigated the impact of related individual differences on learners' preferences, performance and perception after they interacted with our WBI program. We found that some individual differences, and their combination, had an impact on learners' preferences when choosing navigation tools. Consequently, it was clear that the related individual differences altered a learner's preferences. We therefore investigated further to understand how multiple individual differences (Multi-ID) could affect learners' preferences, performance and perception. We found that Multi-ID clearly altered learners' preferences and performance; designers of WBI applications therefore need to consider the combination of individual differences rather than these differences individually. Our findings also showed that the relationships between attributes had an impact on measuring the performance level of learners with Multi-ID. The key contribution of this study lies in three aspects: firstly, investigating the impact of our proposed system, which uses three system features in its design, on a learner's behaviour; secondly, exploring the influence of Multi-ID on a learner's preferences, performance and perception; and thirdly, combining the three measurement attributes to understand performance level.
|
89 |
Priors for new view synthesis
Woodford, Oliver J. January 2009 (links)
New view synthesis (NVS) is the problem of generating a novel image of a scene given a set of calibrated input images of the scene, i.e. their viewpoints, and also that of the output image, are known. The problem is generally ill-posed: a large number of scenes can generate a given set of images, so there may be many equally likely (given the input data) output views. Some of these views will look less natural to a human observer than others, so prior knowledge of natural scenes is required to ensure that the result is visually plausible. The aim of this thesis is to compare and improve upon the various Markov random field and conditional random field prior models, and their associated maximum a posteriori optimization frameworks, that are currently the state of the art for NVS and stereo (itself a means to NVS). A hierarchical example-based image prior is introduced which, when combined with a multi-resolution framework, accelerates inference by an order of magnitude whilst also improving the quality of rendering. A parametric image prior is tested using a number of novel discrete optimization algorithms. This general prior is found to be less well suited to the NVS problem than sequence-specific priors, generating two forms of undesirable artifact, which are discussed. A novel pairwise clique image prior is developed, allowing inference using powerful optimizers. The prior is shown to perform better than a range of other pairwise image priors, distinguishing as it does between natural and artificial texture discontinuities. A dense stereo algorithm with a geometrical occlusion model is converted to the task of NVS. In doing so, a number of challenges are addressed in novel ways; in particular, the new pairwise image prior is employed to align depth discontinuities with genuine texture edges in the output image. The resulting joint prior over smoothness and texture is shown to produce cutting-edge rendering performance. Finally, a powerful new inference framework for stereo that allows the tractable optimization of second-order smoothness priors is introduced. The second-order priors are shown to improve reconstruction over first-order priors in a number of situations.
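As background for these prior models, a pairwise MRF for stereo or NVS can be sketched as an energy over a disparity labelling, with a data term plus a pairwise smoothness term; the truncated-linear smoothness cost below is a generic placeholder rather than the specific priors developed in this thesis:

```python
import numpy as np

def mrf_energy(labels, unary_cost, lam=1.0, trunc=3.0):
    """Energy of a pairwise MRF: per-pixel data costs plus truncated-linear smoothness.

    labels     : (H, W) integer disparity labels
    unary_cost : (H, W, L) data cost of assigning each of L labels at each pixel
    lam        : smoothness weight
    trunc      : truncation keeping genuine discontinuities cheap
    """
    h, w = labels.shape
    rows, cols = np.indices((h, w))
    data = unary_cost[rows, cols, labels].sum()

    # 4-connected pairwise cliques: horizontal and vertical neighbours
    dh = np.minimum(np.abs(labels[:, 1:] - labels[:, :-1]), trunc).sum()
    dv = np.minimum(np.abs(labels[1:, :] - labels[:-1, :]), trunc).sum()
    return data + lam * (dh + dv)

# MAP inference amounts to minimizing this energy over all labellings, e.g. with
# graph cuts or message passing; the prior is whatever the pairwise term encodes.
rng = np.random.default_rng(0)
costs = rng.random((4, 6, 8))            # toy 4x6 image with 8 disparity labels
labels = costs.argmin(axis=2)            # data-only labelling (ignores smoothness)
print(mrf_energy(labels, costs))
```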
|
90 |
A Case Study of the Self-directed Learning of Women Entrepreneurs in the First Four Years of Business Ownership
Carwile, Julie 17 April 2009 (links)
In this qualitative case study, self-directed learning theory was used as the lens to explore experiences of nine women entrepreneurs during the first four years of business ownership as they sought to acquire skills necessary to run their businesses. Data were collected over six months through in-person 90-minute interviews and follow-up questions posed by telephone and email. Qualitative data software was used for coding and thematic analysis, resulting in five broad conclusions related to learning, with additional unanticipated findings. Study participants engaged in a variety of self-directed learning activities, mostly through trial and error experimentation, and possessed varying motivations for learning. Educational level and reliance on past industry experience limited openness to new experiences and commitment to learning for some, particularly those with high school degrees or limited college experience. The majority of learning was pursued "just in time," as the need arose when a challenge presented itself, rather than in a pre-planned manner. Learning was heavily reliant on other people: most sought the advice of paid professionals, former co-workers, or friends and family. The use of a mentor for learning was identified by one participant, while three employed business coaches for professional guidance. Much of their learning was highly instrumental in nature, focused on here-and-now problem solving related to managing employees, handling legal issues in establishing the business, and learning to market themselves. While extremely self-confident in their abilities, most of the women struggled with issues of family and work-life balance, and several described guilt over neglecting one aspect of their lives for the other. Study conclusions emphasize the importance of knowing how to learn in the entrepreneurial context and suggest ways entrepreneurs can access knowledge and new experiences for learning, with implications for entrepreneurship programs, government agencies, and educators.
|