351 |
Assessment of the Extent of Agreement on the Implementation of Instructional Design Principles Among Corporate Training and Development Experts Grovdahl, Elba C. 01 January 1987 (has links) (PDF)
A sample of corporate instructional designers and professors of instructional design completed the "Corporate Instructional Design Scale." The data yielded information on the extent of agreement that descriptive statements identified conventionally and systematically designed instruction.
Descriptive and asymmetric log-linear analyses were conducted. In the asymmetric log-linear analyses, the extent of agreement was used as the dependent variable. The three independent variables, each with three levels, were Program type (conventionally designed instruction, both conventionally and systematically designed instruction, and systematically designed instruction), Instructional component (instructional intents, instructional strategies, and instructional assessments), and Trainer type (professional trainers in manufacturing, professional trainers in non-manufacturing, and professors of instructional design). The asymmetric log-linear analysis, using 16 models, was a 3x3x3x3 factorial design.
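To make the analysis concrete, a minimal sketch of how an asymmetric log-linear model for a 3x3x3x3 agreement table could be fitted with Python's statsmodels is shown below; the counts, factor labels and model form are hypothetical stand-ins, not the study's actual data or specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical cell counts for the 3x3x3x3 table: extent of agreement crossed
# with Program type, Instructional component and Trainer type.
rng = np.random.default_rng(0)
cells = pd.MultiIndex.from_product(
    [["low", "medium", "high"],                             # agreement (DV)
     ["conventional", "both", "systematic"],                # program type
     ["intents", "strategies", "assessments"],              # component
     ["manufacturing", "non-manufacturing", "professor"]],  # trainer type
    names=["agreement", "program", "component", "trainer"],
).to_frame(index=False)
cells["count"] = rng.poisson(20, size=len(cells))           # placeholder counts

# Asymmetric log-linear model: saturated among the explanatory factors, with
# the agreement factor linked to each of them (a logit-equivalent formulation).
model = smf.glm(
    "count ~ C(program) * C(component) * C(trainer)"
    " + C(agreement) * (C(program) + C(component) + C(trainer))",
    data=cells,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
```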
The extent of agreement on the indicators of conventional instruction was lower than the extent of agreement on the indicators of systematic instruction. The extent of agreement for instructional assessment indicators was lower than the extent of agreement for instructional intents and strategies. There were only minor differences between the extent of agreement on indicators classified as intents and indicators classified as strategies. The extent of agreement on the indicators which differentiated conventionally and systematically designed instruction was higher for the professors of instructional design than for the trainers in manufacturing and non-manufacturing companies.
Study results should be carefully considered by professors of instructional design when designing their instructional design courses. The high extent of agreement by professors of instructional design on items that distinguished conventional instruction from systematic instruction suggests that academia is fairly clear about the indicators of instructional design, especially instructional intents and instructional strategies, while practitioners of instructional design show a substantially lower extent of agreement. These results suggest at least two conclusions. First, the academic world of instructional design is not in tune with the corporate world. Academia has been promoting idealized procedures for instructional design, while practitioners have adjusted their instructional designs to corporate realities of time and cost. Second, corporate instructional designers have found the academic world's suggestions unrealistic. Corporate instructional designers have made modifications to their instructional designs. Their instructional designs may only approximate whatever type of instruction the professional trainers or the corporation where they are employed may advocate.
|
352 |
Bayesian Analysis of Temporal and Spatio-temporal Multivariate Environmental Data El Khouly, Mohamed Ibrahim 09 May 2019 (has links)
High-dimensional space-time datasets are now available in many areas of life such as the economy, agriculture, health, and the environment. Meanwhile, it is challenging to reveal possible connections between climate change and extreme weather events such as hurricanes or tornadoes. In particular, the relationship between tornado occurrence and climate change has remained elusive. Moreover, modeling multivariate spatio-temporal data is computationally expensive. There is a great need for computationally feasible models that account for temporal, spatial, and inter-variable dependence. Our research addresses those areas in two ways. First, we investigate connections between changes in tornado risk and the increase in atmospheric instability over Oklahoma. Second, we propose two multiscale spatio-temporal models, one for multivariate Gaussian data, and the other for matrix-variate Gaussian data. Those frameworks are novel additions to the existing literature on Bayesian multiscale models. In addition, we have proposed parallelizable MCMC algorithms to sample from the posterior distributions of the model parameters with improved computational efficiency. / Doctor of Philosophy / Over 1000 tornadoes are reported every year in the United States, causing massive losses of lives and possessions, according to the National Oceanic and Atmospheric Administration. Therefore, it is worthwhile to investigate possible connections between climate change and tornado occurrence. However, environmental datasets are massive and three- or four-dimensional (two- or three-dimensional space, plus time), and the relationship between tornado occurrence and climate change has remained elusive. Moreover, it is computationally expensive to analyze those high-dimensional space-time datasets. In part of our research, we have found a significant relationship between the occurrence of strong tornadoes over Oklahoma and meteorological variables. Some of those meteorological variables have been affected by ozone depletion and emissions of greenhouse gases. Additionally, we propose two Bayesian frameworks to analyze multivariate space-time datasets with fast and feasible computations. Finally, our analyses indicate different temperature patterns, with distinct rates of change, at different atmospheric altitudes over the United States.
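The parallelizable-MCMC idea can be illustrated with a minimal, self-contained sketch: independent random-walk Metropolis chains targeting a toy Gaussian posterior, run in separate processes. The toy model and all names are placeholders, not the dissertation's actual samplers or models.

```python
import numpy as np
from multiprocessing import Pool

def log_post(theta, y):
    """Toy log-posterior: Gaussian likelihood with a N(0, 10^2) prior on the mean."""
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * (theta / 10.0) ** 2

def run_chain(args):
    seed, y, n_iter, step = args
    rng = np.random.default_rng(seed)
    theta = rng.normal()
    lp = log_post(theta, y)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()        # random-walk proposal
        lp_prop = log_post(prop, y)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

if __name__ == "__main__":
    y = np.random.default_rng(1).normal(2.0, 1.0, size=50)  # synthetic data
    with Pool(4) as pool:                                    # one process per chain
        chains = pool.map(run_chain, [(s, y, 5000, 0.5) for s in range(4)])
    print("posterior mean estimate:", np.mean(np.concatenate(chains)))
```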
|
353 |
Describing differences in weight and length growth trajectories between white and Pakistani infants in the UK: analysis of the Born in Bradford birth cohort study using multilevel linear spline models Fairley, L., Petherick, E.S., Howe, L.D., Tilling, K., Cameron, N., Lawlor, D.A., West, Jane, Wright, J. January 2013 (has links)
No / OBJECTIVE: To describe the growth pattern from birth to 2 years of UK-born white British and Pakistani infants. DESIGN: Birth cohort. SETTING: Bradford, UK. PARTICIPANTS: 314 white British boys, 383 Pakistani boys, 328 white British girls and 409 Pakistani girls. MAIN OUTCOME MEASURES: Weight and length trajectories based on repeat measurements from birth to 2 years. RESULTS: Linear spline multilevel models for weight and length with knot points at 4 and 9 months fitted the data well. At birth Pakistani boys were 210 g lighter (95% CI -290 to -120) and 0.5 cm shorter (-1.04 to 0.02) and Pakistani girls were 180 g lighter (-260 to -100) and 0.5 cm shorter (-0.91 to -0.03) than white British boys and girls, respectively. Pakistani infants gained length faster than white British infants between 0 and 4 months (+0.3 cm/month (0.1 to 0.5) for boys and +0.4 cm/month (0.2 to 0.6) for girls) and gained more weight per month between 9 and 24 months (+10 g/month (0 to 30) for boys and +30 g/month (20 to 40) for girls). Adjustment for maternal height attenuated ethnic differences in weight and length at birth, but not in postnatal growth. Adjustment for other confounders did not explain differences in any outcomes. CONCLUSIONS: Pakistani infants were lighter and had shorter predicted mean length at birth than white British infants, but gained weight and length quicker in infancy. By age 2 years both ethnic groups had similar weight, but Pakistani infants were on average taller than white British infants.
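As a rough illustration of the modelling approach, here is a minimal sketch of a linear spline multilevel (mixed) model with knots at 4 and 9 months using Python's statsmodels; the synthetic data, column names and random-effects structure are illustrative and do not reproduce the study's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic repeated measures: one row per measurement occasion, with columns
# child_id, age_months, ethnicity and weight_kg (all names are illustrative).
rng = np.random.default_rng(0)
ages = np.array([0.0, 1, 2, 4, 6, 9, 12, 18, 24])
n_children = 80
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n_children), len(ages)),
    "age_months": np.tile(ages, n_children),
    "ethnicity": np.repeat(rng.choice(["white", "pakistani"], n_children), len(ages)),
})
df["weight_kg"] = (3.4 + 0.7 * df["age_months"]
                   - 0.4 * np.clip(df["age_months"] - 4, 0, None)
                   - 0.2 * np.clip(df["age_months"] - 9, 0, None)
                   + rng.normal(0, 0.4, len(df)))

# Linear spline basis with knots at 4 and 9 months: s1 is age, while s2 and s3
# are the changes in slope after 4 and 9 months respectively.
df["s1"] = df["age_months"]
df["s2"] = np.clip(df["age_months"] - 4, 0, None)
df["s3"] = np.clip(df["age_months"] - 9, 0, None)

# Multilevel (mixed) model: fixed spline-by-ethnicity effects, with a random
# intercept and random spline slopes for each child.
fit = smf.mixedlm(
    "weight_kg ~ (s1 + s2 + s3) * C(ethnicity)",
    data=df, groups="child_id", re_formula="~ s1 + s2 + s3",
).fit()
print(fit.summary())
```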
|
354 |
Refractive indices used by the Haag-Streit Lenstar to calculate axial biometric dimensions Suheimat, M., Verkicharla, P.K., Mallen, Edward A.H., Rozema, J.J., Atchison, D.A. 03 December 2014 (has links)
No / PURPOSE: To estimate refractive indices used by the Lenstar biometer to translate measured optical path lengths into geometrical path lengths within the eye. METHODS: Axial lengths of model eyes were determined using the IOLMaster and Lenstar biometers; comparing those lengths gave an overall eye refractive index estimate for the Lenstar. Using the Lenstar Graphical User Interface, we noticed that boundaries between media could be manipulated and opposite changes in optical path lengths on either side of the boundary could be introduced. The ratios of those changes were combined with the overall eye refractive index to estimate separate refractive indices. Furthermore, Haag-Streit provided us with a template to obtain 'air thicknesses' to compare with geometrical distances. RESULTS: The axial length estimates obtained using the IOLMaster and the Lenstar agreed to within 0.01 mm. Estimates of group refractive indices used in the Lenstar were 1.340, 1.341, 1.415, and 1.354 for cornea, aqueous, lens, and overall eye, respectively. Those refractive indices did not match those of schematic eyes, but were close in the cases of aqueous and lens. Linear equations relating air thicknesses to geometrical thicknesses were consistent with our findings. CONCLUSION: The Lenstar uses different refractive indices for different ocular media. Some of the refractive indices, such as that for the cornea, are not physiological; therefore, it is likely that the calibrations in the instrument correspond to instrument-specific corrections and are not the real optical path lengths.
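The conversion underlying these estimates is simply geometrical thickness = optical path length / group refractive index. A short sketch using the group indices reported above, with purely illustrative optical path lengths:

```python
# Geometrical thickness is the measured optical path length (OPL) divided by the
# group refractive index of the medium: d = OPL / n.
group_index = {"cornea": 1.340, "aqueous": 1.341, "lens": 1.415}  # from the abstract
overall_eye_index = 1.354

# Illustrative optical path lengths (mm) for each segment of a model eye.
opl = {"cornea": 0.74, "aqueous": 4.16, "lens": 5.10}

geometric = {medium: opl_mm / group_index[medium] for medium, opl_mm in opl.items()}
for medium, d in geometric.items():
    print(f"{medium}: {d:.3f} mm geometrical thickness")

# Whole-eye conversion with a single overall index, as used for axial length.
total_opl = 32.5  # illustrative optical axial length (mm)
print(f"axial length = {total_opl / overall_eye_index:.2f} mm")
```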
|
355 |
The effect of dietary estimates calculated using food frequency questionnaires on micronuclei formation in European pregnant women: a NewGeneris study Vande Loock, K., Botsivali, M., Zangogianni, M., Anderson, Diana, Baumgartner, Adolf, Fthenou, E., Chatzi, L., Marcos, R., Agramunt, S., Namork, E., Granum, B., Knudsen, L.E., Nielssen, J.K.S., Meltzer, H.M., Haugen, M., Kyrtopoulos, S.A., Decordier, I., Plas, G., Roelants, M., Merlo, F., Kleinjans, J.C., Kogevinas, M., Kirsch-Volders, M. 07 October 2014 (has links)
No / The use of biomarkers of early genetic effects, predictive for cancer, such as micronuclei (MN) in lymphocytes, may help to investigate the association between diet and cancer. We hypothesised that the presence of mutagens in the diet may increase MN formation. A 'pooled' standardised analysis was performed by applying the same experimental protocol for the cytokinesis block micronucleus assay in 625 young healthy women after delivery from five European study populations (Greece, Denmark, UK, Spain and Norway). We assessed MN frequencies in mono- and binucleated T-lymphocytes (MNMONO and MNBN) and the cytokinesis blocked proliferation index using a semi-automated image analysis system. Food frequency questionnaires (FFQs) were used to estimate intake of fatty acids and a broad range of immunotoxic and genotoxic/carcinogenic compounds through the diet. Pooled difference based on delivery type revealed higher MNMONO frequencies in caesarean than in vaginal delivery (P = 0.002). Statistical analysis showed a decrease in MNMONO frequencies with increasing calculated omega-6 PUFA concentrations and a decrease in MNBN frequencies with increasing calculated omega-3 PUFA concentrations. The expected toxic compounds estimated by FFQs were not associated with MN formation in mothers after delivery. In pregnant women, an omega-3 and -6 rich diet estimated by FFQ is associated with lower MN formation during pregnancy and delivery.
|
356 |
A qualitative study of the impact of organisational development interventions on the implementation of Outcomes Based Education Ramroop, Renuka Suekiah 30 November 2004 (has links)
Outcomes Based Education (OBE) has been, since its inception, fraught with problems. OBE is by its very nature complex. To fully embrace this method and ensure its success, schools must be able to make the necessary paradigm shift. This can only be achieved when schools receive relevant and empowering training, support and development. In other words, organisational development must be the key. The aim of this study is to explore the impact of organisational development interventions on the implementation of OBE. The case study method was employed, and it was found that schools that received organisational development interventions together with Outcomes Based Education were able to implement this method with greater understanding, skill, and confidence.
The investigation recommends an organisational development design that could be used instead of the cascade model, and provides suggestions on what can be done to ensure a more successful implementation process. / Educational Studies / M. Ed (Education Management)
|
357 |
Statistical modelling of return on capital employed of individual units Burombo, Emmanuel Chamunorwa 10 1900 (has links)
Return on Capital Employed (ROCE) is a popular financial instrument and communication tool for the appraisal of companies. Often, companies' management and other practitioners use untested rules and behavioural approaches when investigating the key determinants of ROCE, instead of the scientific statistical paradigm. The aim of this dissertation was to identify and quantify key determinants of ROCE of individual companies listed on the Johannesburg Stock Exchange (JSE), by comparing classical multiple linear regression, principal components regression, generalized least squares regression, and robust maximum likelihood regression approaches in order to improve companies' decision making. The performance indicators used to arrive at the best approach were the coefficient of determination (R²), the adjusted R², and the Mean Square Residual (MSE). Since the ROCE variable had positive and negative values, two separate analyses were done.
The classical multiple linear regression models were constructed using a stepwise directed search with log ROCE as the dependent variable for the two data sets. Assumptions were satisfied and the problem of multicollinearity was addressed. For the positive ROCE data set, the classical multiple linear regression model had an R² of 0.928, an adjusted R² of 0.927, and an MSE of 0.013, and the lead key determinant was Return on Equity (ROE), with positive elasticity, followed by Debt to Equity (D/E) and Capital Employed (CE), both with negative elasticities. The model showed good validation performance. For the negative ROCE data set, the classical multiple linear regression model had an R² of 0.666, an adjusted R² of 0.652, and an MSE of 0.149, and the lead key determinant was Assets per Capital Employed (APCE), with a positive effect, followed by Return on Assets (ROA) and Market Capitalization (MC), both with negative effects. The model showed poor validation performance. The results indicated more and less precision than those found by previous studies. This suggested that the key determinants are also important sources of variability in ROCE of individual companies that management need to work with.
To handle the problem of multicollinearity in the data, principal components were selected using the Kaiser-Guttman criterion. The principal components regression model was constructed using log ROCE as the dependent variable for the two data sets. Assumptions were satisfied. For the positive ROCE data set, the principal components regression model had an R² of 0.929, an adjusted R² of 0.929, and an MSE of 0.069, and the lead key determinant was PC4 (log ROA, log ROE, log Operating Profit Margin (OPM)), followed by PC2 (log Earnings Yield (EY), log Price to Earnings (P/E)), both with positive effects. The model resulted in satisfactory validation performance. For the negative ROCE data set, the principal components regression model had an R² of 0.544, an adjusted R² of 0.532, and an MSE of 0.167, and the lead key determinant was PC3 (ROA, EY, APCE), followed by PC1 (MC, CE), both with negative effects. The model indicated accurate validation performance. The results showed that the use of principal components as independent variables did not improve on the classical multiple linear regression model's prediction in our data. This implied that the key determinants are less important sources of variability in ROCE of individual companies that management need to work with.
Generalized least squares regression was used to address heteroscedasticity and dependence in the data. It was constructed using a stepwise directed search with ROCE as the dependent variable for the two data sets. For the positive ROCE data set, the weighted generalized least squares regression model had an R² of 0.920, an adjusted R² of 0.919, and an MSE of 0.044, and the lead key determinant was ROE with a positive effect, followed by D/E with a negative effect, Dividend Yield (DY) with a positive effect and lastly CE with a negative effect. The model indicated accurate validation performance. For the negative ROCE data set, the weighted generalized least squares regression model had an R² of 0.559, an adjusted R² of 0.548, and an MSE of 57.125, and the lead key determinant was APCE, followed by ROA, both with positive effects. The model showed weak validation performance. The results suggested that the key determinants are less important sources of variability in ROCE of individual companies that management need to work with. Robust maximum likelihood regression was employed to handle the problem of contamination in the data. It was constructed using a stepwise directed search with ROCE as the dependent variable for the two data sets. For the positive ROCE data set, the robust maximum likelihood regression model had an R² of 0.998, an adjusted R² of 0.997, and an MSE of 6.739, and the lead key determinant was ROE with a positive effect, followed by DY and lastly D/E, both with negative effects. The model showed strong validation performance. For the negative ROCE data set, the robust maximum likelihood regression model had an R² of 0.990, an adjusted R² of 0.984, and an MSE of 98.883, and the lead key determinant was APCE with a positive effect, followed by ROA with a negative effect. The model also showed strong validation performance. The results reflected that the key determinants are major sources of variability in ROCE of individual companies that management need to work with.
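To illustrate the four approaches compared in this dissertation, here is a minimal sketch in Python's statsmodels on synthetic data; the predictors, weights and tuning choices are placeholders rather than the dissertation's actual variables or settings.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for the financial ratios (e.g. ROE, D/E, CE, ...).
X = rng.normal(size=(n, 4))
y = 1.0 + X @ np.array([0.8, -0.3, -0.2, 0.1]) + rng.normal(scale=0.5, size=n)
Xc = sm.add_constant(X)

# 1. Classical multiple linear regression (OLS).
ols = sm.OLS(y, Xc).fit()

# 2. Principal components regression: regress on the leading components.
Z = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T[:, :2]                      # keep the first two components
pcr = sm.OLS(y, sm.add_constant(scores)).fit()

# 3. Weighted (generalized) least squares, with weights taken as the
#    reciprocal of an assumed variance function.
weights = 1.0 / (1.0 + np.abs(X[:, 0]))
wls = sm.WLS(y, Xc, weights=weights).fit()

# 4. Robust regression (M-estimation with Huber's T).
rlm = sm.RLM(y, Xc, M=sm.robust.norms.HuberT()).fit()

for name, res in [("OLS", ols), ("PCR", pcr), ("WLS", wls), ("Robust", rlm)]:
    print(name, np.round(res.params, 3))
```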
Overall, the findings showed that the use of robust maximum likelihood regression provided more precise results than the three competing approaches, because it is more consistent, sufficient and efficient, has a higher breakdown point and requires no restrictive conditions. Company management can establish and control proper marketing strategies using the key determinants, and the results of these strategies can lead to an improvement in ROCE. / Mathematical Sciences / M. Sc. (Statistics)
|
358 |
An analytical approach to real-time linearization of a gas turbine engine model Chung, Gi Yun 22 January 2014 (has links)
A recent development in the design of control systems for jet engines is to use a suitable, fast and accurate model running on board. The development of linear models is particularly important, as most engine control designs are based on linear control theory. Engine control performance can be significantly improved by increasing the accuracy of the developed model. The current state of the art is to use piecewise linear models at selected equilibrium conditions for the development of set-point controllers, followed by scheduling of the resulting controller gains as a function of one or more of the system states. However, arriving at an effective gain scheduler that can accommodate fast transients covering a wide range of operating points can become quite complex and involved, so controller performance is often sacrificed for simplicity.
This thesis presents a methodology for developing a control-oriented analytical linear model of a jet engine at both equilibrium and off-equilibrium conditions. This scheme requires a nonlinear engine model to run on board in real time. The off-equilibrium analytical linear model provides improved accuracy and flexibility over the commonly used piecewise linear models developed using numerical perturbations. Linear coefficients are obtained by evaluating, at current conditions, analytical expressions which result from differentiating simplified nonlinear expressions. Residualization of the fast-dynamics states is utilized, since the fast dynamics are typically outside the primary control bandwidth. Analytical expressions based on the physics of the aerothermodynamic processes of a gas turbine engine facilitate a systematic approach to the analysis and synthesis of model-based controllers. In addition, the use of analytical expressions reduces the computational effort, enabling linearization in real time at both equilibrium and off-equilibrium conditions for a more accurate capture of system dynamics during aggressive transient maneuvers.
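The two central ideas, evaluating analytical Jacobians at the current (possibly off-equilibrium) condition and residualizing the fast states, can be sketched on a toy two-state system; the dynamics below are hypothetical and stand in for the engine model.

```python
import numpy as np

def f(x, u):
    """Toy nonlinear dynamics x_dot = f(x, u); stands in for the engine model."""
    x1, x2 = x
    return np.array([-x1 + 0.5 * x2**2 + u,
                     -20.0 * x2 + 2.0 * x1 * u])   # x2 plays the 'fast' state

def jacobians(x, u):
    """Analytical Jacobians A = df/dx and B = df/du at the current condition."""
    x1, x2 = x
    A = np.array([[-1.0,      x2],
                  [2.0 * u, -20.0]])
    B = np.array([[1.0],
                  [2.0 * x1]])
    return A, B

# Linearize at an off-equilibrium operating condition (no trim point required).
x0, u0 = np.array([1.2, 0.3]), 0.8
A, B = jacobians(x0, u0)

# Finite-difference check: the numerical-perturbation alternative mentioned above.
eps = 1e-6
A_fd = np.column_stack([(f(x0 + eps * np.eye(2)[:, k], u0) - f(x0, u0)) / eps
                        for k in range(2)])
print("max |A - A_fd| =", np.abs(A - A_fd).max())

# Residualize the fast state x2 (set x2_dot ~ 0 and solve it out):
# A_s = A11 - A12 A22^{-1} A21,  B_s = B1 - A12 A22^{-1} B2.
A11, A12, A21, A22 = A[:1, :1], A[:1, 1:], A[1:, :1], A[1:, 1:]
B1, B2 = B[:1, :], B[1:, :]
A_s = A11 - A12 @ np.linalg.inv(A22) @ A21
B_s = B1 - A12 @ np.linalg.inv(A22) @ B2
print("reduced slow model:", A_s.ravel(), B_s.ravel())
```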
The methodology is formulated and applied to a separate-flow, twin-spool turbofan engine model in the Numerical Propulsion System Simulation (NPSS) platform. The fidelity of the linear model is examined by validating it against a detailed nonlinear engine model using time-domain responses, normalized additive uncertainty and the nu-gap metric. The effects of each simplifying assumption, which are crucial to the linear model development, on the fidelity of the linear model are analyzed in detail. A case study is performed to investigate the case in which the current state of the system (including both slow and fast states) is not readily available from the nonlinear simulation model. Also, a simple model-based controller is used to illustrate the benefits of the proposed modeling approach.
|
359 |
Optimal Designs for Treatment Comparisons under Generalized Linear Models Ho, Han Wei (何漢葳) Unknown Date (has links)
This study constructs D- and A-optimal designs under generalized linear models, examined in two parts according to the treatment structure: completely randomized designs (CRD) and randomized block designs (RBD).
Based on properties of the determinant and theoretical results derived under the completely randomized design, we first propose an algorithm that quickly and substantially narrows the search range for D-optimal exact designs. For analytic solutions, we start by splitting the variances of the v treatments into two groups and construct the corresponding D-optimal approximate design. From this we find that (1) the upper and lower bounds of the optimal number of replicates for each treatment depend not on how the variances differ across treatments but on how many treatments share the same variance; and (2) even treatments with very large variances must be allocated observations in order to maximize the determinant. This implies that when v is large, equal allocation should be an efficient design. For exact designs, we can only derive the D-optimal design when one treatment's variance is particularly large or particularly small, and we give an example explaining why a general solution cannot be obtained.
In addition, we derive the D-optimal approximate design when the variances of the three treatments are all different, and the A-optimal approximate design when all v treatments have different variances.
For the construction of optimal randomized block designs, we focus on the cases v=2 and v=3 and assume the block sizes are given. When v=2, the determinant associated with each block is unaffected by the other blocks, so the D- and A-optimal designs are obtained by optimizing the determinant of each block separately according to the results for the completely randomized design. It is worth noting that if we further assume that the ratio (>1) of the two treatment variances is the same in every block and that all block sizes are equal, then rounding each treatment's optimal total under the approximate design to the nearest integer and splitting it equally across the blocks does not necessarily yield an optimal design. When v=3, the determinant is very complicated even with only two blocks; so far we can only prove that, when the treatment variances within each block are equal (they may differ across blocks), allocating the given block size equally is D-optimal. When the treatment variances within a block are not all equal, we can, taking two blocks as an example and reasoning by analogy with the completely randomized design, only conjecture that when the ordering of the treatment variances is the same in both blocks, the optimal number of replicates per treatment is again inversely related to the variance. Since this study assumes that treatment and block effects are additive, it is reasonable to assume that the ordering of treatment variances is the same across blocks. / The problem of finding D- and A-optimal designs for the zero- and one-way elimination of heterogeneity under generalized linear models is considered. Since GLM designs rely on the values of the parameters to be estimated, our strategy is to employ locally optimal designs. For the zero-way elimination model, a theorem-based algorithm is proposed to search for D-optimal exact designs. A formula for constructing the D-optimal approximate design when the values of the unknown parameters split into two groups, of sizes m and v-m respectively, is derived. Analytic solutions for the exact counterpart, however, are restricted to the cases m=1 and m=v-1. An example is given to explain the problem involved.
On the other hand, the upper and lower bounds of the optimal number of replicates per treatment are shown to depend on m rather than on the unknown parameters. These bounds imply that designs with numbers of replications as equal as possible across treatments are efficient in terms of D-optimality.
In addition, a D-optimal approximate design when the values of the unknown parameters are divided into three groups is also obtained. A closed-form expression for an A-optimal approximate design for comparing arbitrary v treatments is given.
For the one-way elimination model, our focus is on the D-optimal designs for v=2 and v=3 with each block size given. D- and A-optimality for v=2 can be achieved by assigning, in each block separately, units to the treatment with the smaller variance in proportion to the square root of the ratio of the two variances (which is larger than 1). For v=3, the structure of the determinant is much more complicated even for two blocks, and we can only show that, when the treatment variances are the same within a block, a design with numbers of replicates as equal as possible in each block is a D-optimal block design. Some numerical evidence suggests that a design in which the numbers of replicates per block are inversely proportional to the treatment variances does better in terms of D-optimality, as long as the ordering of the treatment variances is the same across blocks, which is reasonable under the additive model we assume.
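A numerical sketch of the approximate-design ideas above, under a simplifying assumption: a one-way layout whose information matrix is diagonal with entries w_i/sigma_i^2, where the w_i are design weights and the sigma_i^2 are the treatment variances implied by the GLM at the locally assumed parameter values. The D-optimal weights are found numerically, and the A-optimal weights use the allocation with w_i proportional to sigma_i; this is an illustration, not the thesis' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed treatment variances at the locally assumed parameter values.
sigma2 = np.array([1.0, 1.0, 4.0, 9.0])   # hypothetical; v = 4 treatments
v = len(sigma2)

def neg_log_det(w):
    # Information matrix assumed diagonal: M = diag(w_i / sigma_i^2).
    return -np.sum(np.log(w / sigma2))

# D-optimal approximate design: maximize log det M over the weight simplex,
# starting from an unequal allocation proportional to the variances.
cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
w0 = sigma2 / sigma2.sum()
res = minimize(neg_log_det, w0, bounds=[(1e-6, 1)] * v, constraints=cons)
print("D-optimal weights:", np.round(res.x, 3))   # ~equal allocation

# A-optimal approximate design: minimizing tr(M^{-1}) = sum(sigma_i^2 / w_i)
# gives weights proportional to sigma_i.
w_A = np.sqrt(sigma2) / np.sqrt(sigma2).sum()
print("A-optimal weights:", np.round(w_A, 3))
```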
|
360 |
Simulation and empirical comparisons of estimation methods for the APC model Ou, Chang Jun (歐長潤) Unknown Date (has links)
Since the beginning of the 20th century, improvements in public health, medical care and other factors have greatly reduced mortality rates at all ages and substantially extended average life expectancy. The effects of longer lifespans have become increasingly apparent in recent years, and population ageing and its related issues have attracted particular attention, because ageing has thoroughly changed people's life planning; whether mortality rates will continue to decline has therefore become a popular research topic. Many models have been proposed to describe changes in mortality. The Age-Period-Cohort (APC) model, developed in recent decades, simultaneously considers age, period and cohort as explanatory variables and has become one of the most widely used models. It decomposes mortality into age, period and cohort effects and is commonly used in epidemiology to explore whether disease and mortality are related to age, period and cohort, although it usually serves only as a rough description of the data. This study evaluates the feasibility of using the APC model to analyse mortality.
The biggest problem with the APC model is non-identification, that is, collinearity among the age, period and cohort variables, and many methods for estimating the APC model parameters have been developed in response to this identification problem. This study compares seven common estimation methods for the APC model, including the intrinsic estimator (IE), constrained generalized linear models (cglim_age, cglim_period and cglim_cohort), the sequential method ACP, the sequential method APC and the autoregressive (AR) model, in order to determine which estimation method is more stable. The evaluation consists of two parts: computer simulation and empirical analysis.
The simulation part compares the estimation methods to see which yields smaller estimation errors for age-specific mortality rates and for the APC parameters; the empirical analysis uses cross-validation to find the best estimation method for mortality forecasting. In addition, Monte Carlo methods are used to check the APC model assumptions and assess the model's feasibility. Preliminary findings, using Taiwan mortality data as the empirical case, show that the estimation methods considered give roughly comparable estimates of age-specific mortality rates but different interpretations of the age, period and cohort effects, and that the model assumptions are not well satisfied. In the cross-validation, the Lee-Carter model and its extensions have smaller prediction errors than the APC models, indicating that the Lee-Carter model is better overall. / Since the beginning of the 20th century, human beings have been experiencing longer life expectancy and lower mortality rates, which can be attributed to constant improvements in factors such as medical technology, economics, and the environment. Prolonged life expectancy has dramatically changed life planning and lifestyle after retirement. The change would be even more pronounced if mortality rates fell further, and the study of mortality has thus become popular in recent years. Many methods have been proposed to describe the change in mortality rates. Among them, the Age-Period-Cohort (APC) model is a popular method used in epidemiology to discuss the relation between disease, mortality rates, age, period and cohort.
Non-identification (i.e. collinearity) is a serious problem for the APC model, and many estimation methods have been proposed to deal with it. In the first part of this paper, we use simulation to compare and evaluate popular estimation methods for the APC model, such as the intrinsic estimator (IE), generalized linear models with constraints on age, period or cohort (c-glim), the sequential method, and the autoregressive (AR) model. The evaluation methods considered include Monte Carlo simulation and cross-validation. In addition, the mortality data of Taiwan (data source: Ministry of the Interior) are used to examine the validity and model assumptions of these methods. In the second part of this paper, we also apply a similar research method to the Lee-Carter model and compare it with the APC model. We found that the Lee-Carter model has smaller prediction errors than the APC models in the cross-validation.
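A minimal sketch of the identification problem and of one simple constrained-GLM remedy (equating two adjacent period effects), using synthetic Poisson mortality data; the constraint choice and variable names are illustrative, not the thesis' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic age x period table of death counts; cohort = period - age makes the
# three factors exactly collinear, which is the non-identification problem.
ages = np.arange(0, 90, 5)
periods = np.arange(1980, 2015, 5)
grid = pd.MultiIndex.from_product([ages, periods], names=["age", "period"]).to_frame(index=False)
grid["cohort"] = grid["period"] - grid["age"]
grid["exposure"] = 1e5
rate = np.exp(-9.0 + 0.08 * grid["age"] - 0.01 * (grid["period"] - 1980))
grid["deaths"] = rng.poisson(rate * grid["exposure"])

# The full dummy design (with intercept) is rank deficient by exactly one.
X = pd.get_dummies(grid[["age", "period", "cohort"]].astype("category"), drop_first=True)
X.insert(0, "intercept", 1.0)
print("columns:", X.shape[1], "rank:", np.linalg.matrix_rank(X.to_numpy(dtype=float)))

# One simple constrained-GLM remedy: force the first two period effects to be
# equal by merging their levels, then fit a Poisson APC model with a log offset.
grid["period_c"] = grid["period"].replace({1985: 1980})
fit = smf.glm(
    "deaths ~ C(age) + C(period_c) + C(cohort)",
    data=grid, family=sm.families.Poisson(), offset=np.log(grid["exposure"]),
).fit()
print(fit.params.head())
```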
|