741 |
Strategies to increase the signal to noise ratio in three-dimensional positron emission tomography. Miller, Matthew P. January 2000
Positron Emission Tomography (PET) is an imaging technique that uses biologically relevant molecules labelled with positron emitting radioisotopes to measure regional tissue function in living organisms. To maximise the detection efficiency, data are acquired in 3D, that is, over all possible detector combinations in a scanner without inter-ring shielding (septa). The gain in sensitivity afforded by 3D PET is offset by the increase in random coincidences, scattered coincidences and deadtime. These problems must be overcome for the gain in sensitivity to be fully realised. The aim of this research project was to investigate strategies to increase the signal to noise ratio of 3D PET data. Additional side shielding, in both neuro and body scanning, has been implemented and assessed. Large gains were achieved using the neuro shields in experimental and clinical studies. The potential of the body shields was tested in experimental and in-vivo studies, which showed that their benefit was scan dependent. For example, no gain was found for a cardiac blood flow (H215O) study. A model-based scatter correction was assessed by comparing compartment ratios within the 'Utah' phantom with radioactivity outside the field of view, with and without neuro shielding. Recovered ratios were within 6% of their actual values. The integration time was reduced in an effort to decrease the system deadtime. A peak increase of 15% in noise equivalent count rate was measured for a uniform cylinder inside the field of view. A random coincidence variance reduction technique was implemented and assessed to reduce the noise contained in the delayed window random coincidence estimate. The algorithm was evaluated using phantoms and tested on clinical data. A mean 16% reduction in coefficient of variation was measured for a C15O torso study.
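The trade-off described in this abstract, sensitivity gains offset by scatter, randoms and deadtime, is conventionally summarised by the noise equivalent count rate (NEC). As an illustrative sketch: the formula below is the standard one from the PET literature, not quoted from the thesis, and the count rates are invented.

```python
def necr(trues, scatter, randoms, k=2.0):
    """Noise equivalent count rate (same units as the inputs).

    k = 2 corresponds to delayed-window randoms subtraction, which
    doubles the variance contributed by randoms; k = 1 applies when a
    noiseless (e.g. singles-based) randoms estimate is used.
    """
    return trues ** 2 / (trues + scatter + k * randoms)

# Invented rates in kcps: shielding that removes out-of-field randoms
# raises NECR even though the true coincidence rate is unchanged.
nec_unshielded = necr(100.0, 40.0, 60.0)
nec_shielded = necr(100.0, 40.0, 30.0)
```

With these invented numbers the shielded configuration yields a higher NEC, mirroring the qualitative gain reported for the neuro shields.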
|
742 |
Evaluation of the specificity of a commercial ELISA for detection of antibodies against porcine respiratory and reproductive syndrome virus in individual oral fluid of pigs collected in two different ways. Sattler, Tatjana; Wodak, Eveline; Schmoll, Friedrich. 19 March 2015
Background: The monitoring of infectious diseases like the porcine reproductive and respiratory syndrome (PRRS) using pen-wise oral fluid samples is becoming increasingly established. The collection of individual oral fluid, which would be useful in the monitoring of PRRSV negative boar studs, is rather difficult. The aim of the study was to test
two methods for individual oral fluid collection from pigs and to evaluate the specificity of a commercial ELISA for detection of PRRSV antibodies in these sample matrices. For this reason, 334 serum samples from PRRSV negative pigs (group 1) and 71 serum samples from PRRSV positive pigs (group 2) were tested for PRRSV antibodies with a
commercial ELISA. Individual oral fluid was collected with a cotton gauze swab from 311 pigs from group 1 and 39 pigs from group 2. Furthermore, 312 oral fluid samples from group 1 and 67 oral fluid samples from group 2 were taken with a self-drying foam swab (GenoTube). The collected oral fluid was then analysed twice with a commercial ELISA for detection of PRRSV antibodies in oral fluid.
|
743 |
NEURAL CONTROL OF CARDIOVASCULAR FUNCTION FOLLOWING SPINAL CORD INJURY IN HUMANS. Aslan, Sevda Coban. 01 January 2006
Maintenance of stable arterial blood pressure during orthostatic challenges is a major problem after spinal cord injury (SCI). Since early participation in rehabilitation is critically important in reducing long term morbidity, recovering the ability to regulate blood pressure during therapy is essential for individuals with SCI. The objective of our study was to investigate short term cardiovascular function of able-bodied (AB), paraplegic (PARA) and tetraplegic (TETRA) subjects in response to head up tilt (HUT) as an early indicator of autonomic damage that might forewarn of future orthostatic regulatory problems. We acquired cardiovascular variables from able-bodied (AB; n=11), paraplegic (PARA; n=5) and tetraplegic (TETRA; n=5) subjects in response to HUT. The SCI patients in both groups were in their first two months post injury. Data were recorded at rest and during 7 min each at 20°, 40°, 60° and 80° HUT. Techniques used to estimate regulatory capability and reflex activity included: mean values and spectral power of heart rate (HR) and arterial blood pressure (BP), baroreflex sequence measurements and cross correlation between HR and systolic blood pressure (SBP). An index of baroreflex sensitivity (BRS), baroreflex effectiveness index (BEI), and the percentage occurrence of systolic blood pressure (BP) ramps and baroreflex sequences were calculated from baroreflex sequence measurements. The spectral power of HR and BP and the cross correlation of systolic BP and HR were examined in low frequency (LF: 0.04-0.15 Hz) and high frequency (HF: 0.15-0.4 Hz) ranges. The BRS index was significantly (p < 0.05) decreased from supine to 80° HUT in AB and TETRA. This index in PARA was the lowest of the three groups at each tilt position, and decreased with tilt.
The percentage of heart beats involved in systolic BP ramps and in baroreflex sequences significantly (p < 0.05) rose from supine to 80° HUT in AB, was relatively unchanged in PARA and declined in TETRA. Both of these indices were significantly (p < 0.05) lower in the SCI than in the AB group at each tilt level. The BEI values were greatest in AB, and declined with tilt in all groups. Spinal cord injured patients had less power of BP and HR fluctuations than AB in both LF and HF regions. The LF spectral power of BP and HR increased with tilt in AB, remained unchanged in PARA and decreased in TETRA. The HF spectral power of HR decreased in all three groups. The peak HR/BP cross correlation in the LF region was greatest in AB, and significantly (p < 0.05) increased during HUT in AB, remained fairly constant in PARA, and declined in TETRA. The peak cross correlation in the HF region significantly (p < 0.05) decreased with tilt in all groups, and the SCI group had lower values than AB at each tilt level. We conclude that both PARA and TETRA had a smaller percentage of SBP ramps, lower BRS, and lower BEI than AB, likely indicating decreased stimulation of arterial baroreceptors and less engagement of feedback control. The mixed sympathetic and parasympathetic innervation of paraplegics, or their elevated HR, may contribute to their significantly lower BRS. Our data indicate that the pathways utilized to evoke baroreflex regulation of HR are compromised by SCI, and this loss may be a major contributor to the decrease in orthostatic tolerance following injury.
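The baroreflex sequence technique used above can be sketched in a few lines. This is a generic illustration of the method (runs of three or more beats in which systolic pressure and RR interval move in the same direction, with BRS as the mean regression slope), not the authors' implementation; the minimum run length and the synthetic data are assumptions.

```python
import numpy as np

def baroreflex_sequences(sbp, rri, min_len=3):
    """Sequence method: find runs of >= min_len beats in which systolic
    blood pressure (sbp, mmHg) and RR interval (rri, ms) change in the
    same direction, and estimate baroreflex sensitivity (BRS) as the
    mean slope of RR interval against SBP (ms/mmHg)."""
    d_sbp = np.diff(sbp)
    concordant = np.sign(d_sbp) * np.sign(np.diff(rri)) > 0
    seqs, slopes = [], []
    i, n = 0, len(d_sbp)
    while i < n:
        if concordant[i]:
            j = i
            # extend the run while direction and concordance persist
            while j < n and concordant[j] and np.sign(d_sbp[j]) == np.sign(d_sbp[i]):
                j += 1
            if j - i + 1 >= min_len:  # j - i diffs span j - i + 1 beats
                run = slice(i, j + 1)
                seqs.append((i, j + 1))
                slopes.append(np.polyfit(sbp[run], rri[run], 1)[0])
            i = j
        else:
            i += 1
    brs = float(np.mean(slopes)) if slopes else float("nan")
    return seqs, brs

# Synthetic beats: SBP rises then falls, RR tracks it at 5 ms/mmHg,
# so the method should find two sequences with slope 5.
sbp = np.array([100.0, 102.0, 104.0, 106.0, 104.0, 102.0, 100.0, 98.0])
rri = 800.0 + 5.0 * (sbp - 100.0)
seqs, brs = baroreflex_sequences(sbp, rri)
```

The BEI mentioned in the abstract would then be the number of such sequences divided by the number of SBP ramps.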
|
744 |
AN INNOVATIVE APPROACH TO MECHANISTIC EMPIRICAL PAVEMENT DESIGN. Graves, Ronnie Clark, II. 01 January 2012
The Mechanistic Empirical Pavement Design Guide (MEPDG), developed by National Cooperative Highway Research Program (NCHRP) project 1-37A, is a very powerful tool for the design and analysis of pavements. The designer uses an iterative process to select design parameters and predict performance; if the performance is not acceptable, they must change design parameters until an acceptable design is achieved.
The design process has more than 100 input parameters across many areas, including climatic conditions, material properties for each layer of the pavement, and information about the anticipated truck traffic. Many of these parameters are known to have an insignificant influence on the predicted performance.
During the development of this procedure, input parameter sensitivity analyses varied a single input parameter while holding the others constant, which does not capture interactions between variables across the parameter space. A portion of this research developed a methodology for global sensitivity analysis of the procedure using random sampling techniques across the entire input parameter space. This analysis was used to select the most influential input parameters, which could then be used in a streamlined design process.
This streamlined method uses Multivariate Adaptive Regression Splines (MARS) to build predictive models derived from a series of actual pavement design solutions from the design software provided by NCHRP. Two different model structures have been developed: one is a series of models which predict pavement distress (rutting, fatigue cracking, faulting and IRI); the second is a forward solution which predicts a pavement thickness given a desired level of distress. These thickness prediction models could be developed for any desired subset of MEPDG solutions, such as typical designs within a given state or climatic zone. These solutions could then be modeled with the MARS process to produce an "Efficient Design Solution" of pavement thickness and performance predictions. The procedure developed has the potential to significantly improve the efficiency of pavement designers by allowing them to examine many different design scenarios prior to selecting a design for final analysis.
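The global sensitivity idea described above, sampling all inputs simultaneously rather than one at a time, can be sketched generically. The toy model, its input ranges and its coefficients below are invented stand-ins, not MEPDG relationships; standardized regression coefficients are one common ranking measure for such a random-sampling analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a distress model: rutting depends strongly on
# thickness and traffic, weakly on a third input. Ranges are invented.
def toy_rutting(thickness, traffic, weak_input):
    return 10.0 - 0.8 * thickness + 0.5 * traffic + 0.05 * weak_input

n = 5000
X = np.column_stack([
    rng.uniform(4, 12, n),   # layer thickness (in)
    rng.uniform(1, 10, n),   # traffic (million ESALs)
    rng.uniform(0, 10, n),   # weakly influential input
])
y = toy_rutting(*X.T)

# Standardized regression coefficients as a global sensitivity index:
# regress standardized output on standardized inputs, then rank by
# absolute coefficient. Interactions would show up as a poor fit.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
ranking = np.argsort(-np.abs(src))  # most influential input first
```

For this invented model the ranking recovers thickness, then traffic, then the weak input, the kind of screening used to pick inputs for a streamlined design process.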
|
745 |
Essays on Consumption and Asset Pricing Puzzles. 王高文 (Wang, Gao-Wen). Unknown Date
This thesis contributes to the literature on consumption-portfolio choice under uncertainty and is motivated by several empirical failures of the standard consumption-based capital asset pricing model (CCAPM). This canonical model has proven disappointing empirically, and it has even been questioned whether it is theoretically valuable and practically useful, even if it is in some sense the only model we have. The frustration stems from the fact that the model performs poorly in practice and generates several well-known consumption puzzles and asset pricing puzzles. The purpose of the thesis is to reexamine these puzzles and then to resolve them.
After the debate of Hansen and Singleton (1983) and Hall (1988),
the estimates of the elasticity of intertemporal substitution (EIS) of consumption in a representative agent model have not reached any consensus. Based on this observation, the first chapter of this thesis focuses on resolving the elasticity puzzle, that is, the apparent unresponsiveness of consumption to interest rates. We propose a new theoretical and empirical perspective on the relationship between consumption growth and asset returns. In the spirit of Hansen and Singleton (1983), we demonstrate that the observed growth rate of consumption responds not only to a specific asset return but also to other asset returns. Empirically, US postwar quarterly data are used to fit the regression model derived in the chapter, and the sample period is 1953Q2-2001Q2.
Empirical results show that the EIS is greater than 0.1, the maximum value considered possible by Hall (1988). Accordingly,
we argue that there is no elasticity puzzle in the standard representative agent model.
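The chapter's central regression, consumption growth responding to several asset returns at once, can be sketched on simulated data. The data-generating process, weights and EIS value below are invented for illustration; this is not the thesis's dataset or its exact specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated quarterly data: consumption growth loads on two asset
# returns through an invented EIS of 0.4 (for illustration only).
T = 2000
r_bond = rng.normal(0.01, 0.02, T)
r_stock = rng.normal(0.02, 0.08, T)
true_eis = 0.4
dc = (0.003 + true_eis * (0.5 * r_bond + 0.5 * r_stock)
      + rng.normal(0.0, 0.001, T))

# OLS of consumption growth on a constant and both returns. With the
# equal weights used in the data-generating process, the EIS is
# recovered as the summed response to a common movement in returns.
X = np.column_stack([np.ones(T), r_bond, r_stock])
beta, *_ = np.linalg.lstsq(X, dc, rcond=None)
eis_hat = float(beta[1] + beta[2])
```

The point of the multi-return specification is that omitting one of the returns would bias the estimated response, which is one way a spuriously small EIS can arise.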
The second chapter provides an explanation for the puzzle of excess sensitivity of consumption to expected income proposed by Flavin (1981). We exploit the consumer's superior information (i.e., windfalls in investments and in income) to integrate the consumption Euler equations into a generalized Euler equation. The implications emerging from this equation can refute much of the empirical evidence against the permanent income hypothesis (PIH). In short, we conclude that consumption growth is sensitive to windfalls in income, but not to expected income. Thus, Friedman's prescient insight is formally corroborated in standard utility theory. The equation also provides an alternative approach permitting more precise estimation of the preference parameters and easier identification of the time-series properties of labor income. Empirical results based on U.S. postwar quarterly data show that the EIS is significantly positive and that labor income follows a nonstationary second-order autoregressive process.
The last chapter of the thesis, chapter three, addresses the equity premium puzzle, proposed by Mehra and Prescott (1985), and the risk-free rate puzzle, proposed by Weil (1989). These two asset pricing puzzles have troubled financial economists for nearly two decades. To date, there is still no convincing solution to the equity premium puzzle. The CCAPM is apparently inconsistent with the data, especially the annual data in the 1889-1978 period used by Mehra and Prescott (1985). This has led many economists to question whether the model should be abandoned. The purpose of the chapter is to resolve the two puzzles, and thereby to consolidate the Lucas-Breeden paradigm embedded in the standard CCAPM. We demonstrate that the equity premium puzzle results from the gaps between the expected asset returns and the actual ones. These gaps have conventionally been regarded as regression disturbances, and explained as good luck or unexpected windfalls. We introduce an alternative approach in which one source of good luck is used to explain another, helping to fill in the specific gap. Results of numerical calculations and parametric estimation show that the gap is significantly narrowed, and hence the equity premium and risk-free rate puzzles are resolved.
|
746 |
Sensitivity and uncertainty analysis of parameters and input data in the stormwater and recipient model StormTac (Känslighets- och osäkerhetsanalys av parametrar och indata i dagvatten- och recipientmodellen StormTac). Stenvall, Brita. January 2004
Three methods of sensitivity and uncertainty analysis were applied to the operative stormwater and recipient model StormTac. The study area is the watershed of Lake Flaten in the municipality of Salem. StormTac's submodels for stormwater, pollutant transport and the recipient were considered. In the sensitivity assessment, the model parameters and inputs were varied one at a time by a constant percentage according to the "one at a time" (OAAT) method, and the response of the outputs was calculated. The stormwater and base flows were most sensitive to perturbations in the precipitation, while the nitrogen, phosphorus and copper loads on the recipient were most sensitive to the respective pollutant's stormwater concentration from developed areas. Uncertainty analysis using Monte Carlo simulation was performed in two ways. (1) All model parameters and inputs were assigned uncertainties, and the resulting uncertainty in each target variable was quantified; then, to estimate each parameter's and input's contribution to the cumulative uncertainty, each uncertainty was omitted one at a time. The most important contributors to the uncertainty of the stormwater flow were the runoff coefficient for forest land and the precipitation (omitting them reduced the difference between the 90th and 10th percentiles of the stormwater flow by 44 % and 33 %, respectively). (2) To identify optimal parameter intervals, the probability of an acceptable value of the target variable was plotted against each parameter's value range. For some of the parameters in StormTac, the results suggest that the intervals should be changed. Uniform probability distributions, bounded by StormTac's minimum and maximum parameter values and by ±50 % of the original value for inputs, were used in both uncertainty analyses.
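The OAAT procedure used in the sensitivity assessment can be sketched generically. The toy flow model below (a rational-method-style product of runoff coefficient, precipitation and area) and its parameter values are invented stand-ins, not StormTac's equations.

```python
def oaat_sensitivity(model, params, perturbation=0.1):
    """One-at-a-time (OAAT) sensitivity: perturb each parameter by a
    fixed fraction while holding the others at their base values, and
    return the relative change in the model output."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        p = dict(params)
        p[name] = value * (1.0 + perturbation)
        sens[name] = (model(p) - base) / base
    return sens

# Toy stand-in for a stormwater flow submodel:
# runoff volume = runoff coefficient * precipitation * area.
def runoff(p):
    return p["phi"] * p["precip"] * p["area"]

s = oaat_sensitivity(runoff, {"phi": 0.3, "precip": 600.0, "area": 2.5})
```

Because the toy model is purely multiplicative, every input here produces the same 10 % relative response; the interesting StormTac result is precisely that the real submodels respond unequally, which is what the OAAT ranking reveals.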
|
747 |
Evaluation and Development of the Dynamic Insulin Sensitivity and Secretion Test for Numerous Clinical Applications. Docherty, Paul David. January 2011
Given the high and increasing social, health and economic costs of type 2 diabetes, early diagnosis and prevention are critical. Insulin sensitivity and insulin secretion are important etiological factors of type 2 diabetes and are used to define an individual’s risk or progression to the disease state. The dynamic insulin sensitivity and secretion test (DISST) concurrently measures insulin sensitivity and insulin secretion. The protocol uses glucose and insulin boluses as stimulus, and the participant response is observed during a relatively short protocol via glucose, insulin and C-peptide assays.
In this research, the DISST insulin sensitivity value was successfully validated against the gold standard euglycaemic clamp with a high correlation (R=0.82), a high insulin resistance diagnostic equivalence (ROC c-unit=0.96), and low bias (-10.6%). Endogenous insulin secretion metrics obtained via the DISST were able to describe clinically important distinctions in participant physiology that were not observed with euglycaemic clamp, and are not available via most established insulin sensitivity tests.
The quick dynamic insulin sensitivity test (DISTq) is a major extension of the DISST that uses the same protocol but only glucose assays. As glucose assays are usually available immediately, the DISTq can provide insulin sensitivity results immediately after the final blood sample, creating a real-time clinical diagnostic. The DISTq correlated well with the euglycaemic clamp (R=0.76), had a high insulin resistance diagnostic equivalence (ROC c-unit=0.89), and limited bias (0.7%). These DISTq results meet or exceed the outcomes of most validation studies of established insulin sensitivity tests such as the IVGTT, HOMA and OGTT metrics. Furthermore, none of the established insulin sensitivity tests are capable of providing immediate or real-time results. Finally, most of the established tests require considerably more intense clinical protocols than the DISTq.
A range of DISST-based tests that used the DISST protocol and varying assay regimens were generated to provide optimum compromises for any given clinical or screening application. Eight DISST-based variants were postulated and assessed via their ability to replicate the fully sampled DISST results. The variants that utilised insulin assays correlated well with the fully sampled DISST insulin sensitivity values (R~0.90), and the variants that assayed C-peptide produced endogenous insulin secretion metrics that correlated well with the fully sampled DISST values (R~0.90 to 1). By taking advantage of the common clinical protocol, tests in the spectrum could be used in a hierarchical system. For example, if a DISTq result is close to a diagnostic threshold, stored samples could be re-assayed for insulin, and the insulin sensitivity value could be 'upgraded' without an additional protocol. Equally, adding C-peptide assays would provide additional insulin secretion information. Importantly, one clinical procedure thus yields potentially several test results.
In-silico investigations were undertaken to evaluate the efficacy of two additional, specific DISTq protocol variations and to observe the pharmacokinetics of anti-diabetic drugs. The first variation combined the boluses used in the DISTq and reduced the overall test time to 20 minutes with only two glucose assays. The results of this investigation implied that no significant degradation of insulin sensitivity values is caused by the change in protocol, and suggested that clinical trials of this protocol are warranted. The second protocol variant added glucose content to the insulin bolus to enable observation of first phase insulin secretion concurrently with insulin sensitivity from glucose data alone. Although concurrent observation was possible without simulated assay noise, when clinically realistic noise was added, model identifiability was lost. Hence, this protocol is not recommended for clinical investigation.
Similar analyses were used to apply the overall dynamic, model-based clinical test approach to other therapeutics. In-silico analysis showed that the pharmacokinetics of insulin sensitiser drugs were described well by the dynamic protocol; however, the pharmacokinetics of insulin secretion enhancing drugs were less observable.
The overall thesis is supported by a common model parameter identification method. The iterative integral parameter identification method is a development of a single, simple integral method. The iterative method was compared to the established non-linear Levenberg-Marquardt parameter identification method. Although the iterative integral method is limited in the type of models it can be used with, it is more robust, accurate and less computationally intense than the Levenberg-Marquardt method.
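The flavour of the integral identification method can be shown on a toy decay model. This is a single-solve illustration under stated assumptions (a one-parameter linear model with noise-free synthetic data), not the thesis's iterative implementation for the full glucose-insulin system; for models where parameters enter nonlinearly, the integral solve would be repeated with updated estimates.

```python
import numpy as np

def identify_decay_rate(t, g):
    """Integral-based identification of p in dG/dt = -p * G.

    Integrating from t0 to t gives G(t) - G(t0) = -p * int_t0^t G dt,
    a relation that is linear in p and is solved by least squares.
    This avoids differentiating noisy measurements, which is the key
    robustness advantage over derivative-based fitting.
    """
    # cumulative trapezoidal integral of G over the sample times
    I = np.concatenate(
        [[0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))])
    A = -I[1:].reshape(-1, 1)
    b = g[1:] - g[0]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(p[0])

# Synthetic samples from G(t) = 10 * exp(-0.7 t), so the true p = 0.7.
t = np.linspace(0.0, 5.0, 51)
g = 10.0 * np.exp(-0.7 * t)
p_hat = identify_decay_rate(t, g)  # recovers ~0.7 up to quadrature error
```

A Levenberg-Marquardt fit of the same data would iterate on a nonlinear residual; the integral formulation reaches the answer in one linear solve, which is the efficiency argument made above.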
Finally, a novel, integral-based method for the evaluation of a-priori structural model identifiability is also presented. This method differs significantly from established, derivative-based approaches as it accounts for sample placement, measurement error, and probable system responses. Hence, it is capable of defining the true nature of identifiability, which is analogue (graded) rather than binary as assumed by the established methods.
The investigations described in this thesis were centred on model-based insulin sensitivity and secretion identification from dynamic insulin sensitivity tests, with a strong focus on maximising clinical efficacy. The low intensity and informative DISST was successfully validated against the euglycaemic clamp. The DISTq further reduces the clinical cost and burden, and was also validated against the euglycaemic clamp. The DISTq represents a new paradigm in the field of low-cost insulin sensitivity testing as it does not require insulin assays. A number of in-silico investigations were undertaken and provided insight regarding the suitability of the methods for clinical trials. Finally, two novel mathematical methods were developed to identify model parameters and assess their identifiability, respectively.
|
748 |
Developing and validating a new comprehensive glucose-insulin pharmacokinetics and pharmacodynamics model. Jamaludin, Ummu. January 2013
Type 2 diabetes has reached epidemic proportions worldwide. The resulting increase in chronic and costly diabetes related complications has potentially catastrophic implications for healthcare systems, and for economies and societies as a whole. One of the key pathological factors leading to type 2 diabetes is insulin resistance (IR), which is the reduced or impaired ability of the body to make use of available insulin to maintain safe glucose concentrations in the bloodstream.
It is essential to understand the physiology of glucose and insulin when investigating the underlying factors contributing to chronic diseases such as diabetes and cardiovascular disease. For many years, clinicians and researchers have been working to develop and use model-based methods to increase understanding and aid therapeutic decision support. However, the majority of practicable tests cannot yield more than basic metrics that allow only a threshold-based assessment of the underlying disorder.
This thesis gives an overview on several dynamic model-based methodologies with different clinical applications in assessing glycaemia via measuring effects of treatment or medication on insulin sensitivity. Other tests are clinically focused, designed to screen populations and diagnose or detect the risk of developing diabetes. Thus, it is very important to observe sensitivity metrics in various clinical and research settings.
Interstitial insulin kinetics and their influence on model-based insulin sensitivity observation was analysed using data from the clinical pilot study of the dynamic insulin sensitivity and secretion (DISST) test and the glucose-insulin PK-PD models. From these inputs, a model of interstitial insulin dose-response that best links insulin action in plasma to response in blood glucose levels was developed. The critical parameters influencing interstitial insulin pharmacokinetics (PKs) are saturation in insulin receptor binding (αG) and the plasma-interstitium diffusion rate (nI). Population values for these parameters are found to be [αG, nI]=[0.05,0.055].
Critically ill patients are regularly fed via constant enteral (EN) nutrition infusions. The impact of incretin effects on endogenous insulin secretion in this cohort remains unclear. It is hypothesised that the identified SI would decrease during interruptions of EN and would increase when EN is resumed, where, for short periods around transition, the true patient SI would be assumed constant. The model-based analysis was able to elucidate incretin effects by tracking the identified model-based insulin sensitivity (SI) in a cohort of critically ill patients. Thus, changes in model-based SI given the fixed assumed endogenous secretion by the model would support the presence of an EN-related incretin effect in the population of non-diabetic, critically ill patients studied.
The PD feedback-control model of Uen was designed to investigate endogenous insulin secretion amongst subjects with different metabolic states and levels of insulin resistance. The underlying effects that influence insulin secretion i.e. incretin effects were also defined by tracking the control model gain/response and the identified insulin sensitivity (SI) using intravenous (IV) bolus and oral glucose responses of insulin sensitivity tests. This new PD control model allowed the characterisation of both static (basal) and dynamic insulin responses, which defined the pancreatic β-cell glucose sensitivity parameters. However, incretin effects were unobserved during oral glucose responses as the PD control gains failed to simulate the true endogenous insulin secretion due to potentially inaccurate glucose appearance rates and low data resolution of glucose concentrations.
The net effect of haemodialysis (HD) treatment on glycaemic regulation and insulin sensitivity in a critically ill cohort was investigated. It was hypothesised that the observed SI would decrease during HD due to enhanced insulin clearance relative to the model, and would be recaptured again when HD was stopped. The changes in the model-based SI metric at HD transitions in a cohort of critically ill patients were evaluated. A significant change of -29% in model-based SI was observed during HD therapy. However, the changes when HD treatment ended were insignificant. Thus, the changes in model-based SI offer a unique observation of insulin kinetics and action in this population of critically ill patients with acute renal failure (ARF) that would better inform metabolic care.
|
749 |
Optimal shape design based on body-fitted grid generation. Mohebbi, Farzad. January 2014
Shape optimization is an important step in many design processes. With the growing use of Computer Aided Engineering in the design chain, it has become very important to develop robust and efficient shape optimization algorithms. The field of Computer Aided Optimal Shape Design has grown substantially over the recent past. In the early days of its development, the method based on small shape perturbations to probe the parameter space and identify an optimal shape was routinely used. This method is nothing but an educated trial and error method. A key development in the pursuit of good shape optimization algorithms has been the advent of the adjoint method to compute the shape sensitivities more formally and efficiently. While undoubtedly very attractive, this method relies on very sophisticated and advanced mathematical tools which are an impediment to its wider use in the engineering community. In that spirit, it is the purpose of this thesis to propose a new shape optimization algorithm based on more intuitive engineering principles and numerical procedures. In this thesis, the new shape optimization procedure which is proposed is based on the generation of a body-fitted mesh. This process maps the physical domain into a regular computational domain. Based on simple arguments relating to the use of the chain rule in the mapped domain, it is shown that an explicit expression for the shape sensitivity can be derived. This enables the computation of the shape sensitivity in one single solve, a performance analogous to the adjoint method, the current state of the art. The discretization is based on the Finite Difference method, a method chosen for its simplicity and ease of implementation. This algorithm is applied to the Laplace equation in the context of heat transfer problems and potential flows.
The applicability of the proposed algorithm is demonstrated on a number of benchmark problems which clearly confirm the validity of the sensitivity analysis, the most important aspect of any shape optimization problem. This thesis also explores the relative merits of different minimization algorithms and proposes a technique to “fix” meshes when inverted element arises as part of the optimization process. While the problems treated are still elementary when compared to complex multiphysics engineering problems, the new methodology presented in this thesis could apply in principle to arbitrary Partial Differential Equations.
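As a minimal, hedged illustration of the mapped-domain idea, a fixed computational grid with the shape parameter entering only through the mapping, consider steady one-dimensional conduction on a domain of length L. The model problem, objective and values below are invented for illustration; the thesis treats general two-dimensional body-fitted grids and derives the sensitivity analytically via the chain rule rather than by the finite differencing shown here.

```python
import numpy as np

def solve_mapped(L, s=1.0, n=101):
    """Steady conduction T'' = -s on (0, L) with T(0) = T(L) = 0,
    solved on the fixed computational domain xi in [0, 1] via the
    mapping x = L * xi, which transforms the equation into
    d2T/dxi2 = -s * L**2. Returns the midpoint temperature, the
    objective J(L); the exact solution gives J(L) = s * L**2 / 8."""
    h = 1.0 / (n - 1)
    m = n - 2
    # tridiagonal finite-difference system for the interior nodes
    A = (np.diag(-2.0 * np.ones(m))
         + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1))
    b = -s * L ** 2 * h ** 2 * np.ones(m)
    T = np.zeros(n)
    T[1:-1] = np.linalg.solve(A, b)
    return T[(n - 1) // 2]

# Shape sensitivity dJ/dL by central differences on the mapped grid;
# for this model problem the exact value is s * L / 4.
L, dL = 2.0, 1e-4
dJdL = (solve_mapped(L + dL) - solve_mapped(L - dL)) / (2.0 * dL)
```

The point of the mapped formulation is that the grid in xi never changes as L does, so the dependence of the solution on the shape parameter is explicit and can be differentiated in a single solve instead of by repeated perturbation as above.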
|
750 |
ACHIEVING CROSS-CULTURAL COMPETENCE IN THE CLASSROOM: CULTURE'S WAYS EXPLORED. Schaeffer, Janna Orlova. January 2011
Over the course of the last few decades, the debate over culture and its relationship to language has remained heated and, one can argue, unresolved. It has been underscored that it is not necessarily the question of culture teaching per se but rather the methods and content of such teaching that remain controversial. Today's world demands that learners are not simply linguistically but also interculturally competent. It has been argued that high levels of intercultural awareness can be achieved with the help of experiential lessons taught in a formal setting that focus on the exploration of self as a cultural being. In this study, three groups of intermediate learners of German and Russian were invited to participate in a number of cultural lessons based on either culture box highlights or experiential activities. The pre- and posttests measured changes in learners' cognitive, behavioral and affective measures of intercultural competence. Results revealed that experiential activities tend to better facilitate the development of learners' intercultural skills and attitudes. Students' written responses to critical incidents were analyzed with the Developmental Model of Intercultural Sensitivity (Bennett, 1993) to assess changes in learners' perspectives and intercultural disposition over the course of the semester. Additionally, learners' experiences with foreign and local cultures were quantified and correlated with cognitive, behavioral and affective measures of intercultural competence. Results showed that not all measures of intercultural competence may be broadened by an individual's firsthand experiences with other cultures. The relevance of one's previous experiences with 'sub-cultures' (states, cities, towns, and communities), i.e. one's 'mobility', must also be acknowledged.
|