301
Predicting effectiveness of potential teachers / Kanitz, Hugo E. January 1971 (has links)
No description available.
302
The relation of non-intellective factors to the academic achievement of college freshmen at the Ohio State University / Dohnayi, Julius Salacaz January 1966 (has links)
No description available.
303
Determinants of early labor market success among young men : race, ability, quantity and quality of schooling / Kohen, Andrew I. January 1973 (has links)
No description available.
304
A study of relationships among selected teacher variables and expressed preferences for student-centered, non-direct science education / Shay, Edwin Lawrence January 1974 (has links)
No description available.
305
Assessment center feedback and personnel development / Durham, Charles Vernon January 1978 (has links)
No description available.
306
Cognitive Vulnerability and the Actuarial Prediction of Depressive Course / Grant, David Adam January 2012 (has links)
A wealth of research indicates that depression is a serious global health issue and that it is often characterized by a complicated and varied course. The ability to predict depressive course would be tremendously valuable for clinicians; however, the extant literature has not yet produced an accurate and efficient means of predicting the course of depression. Research also indicates that cognitive variables - and cognitive vulnerability factors in particular - are related to the course of depression. Examining data provided by participants in the Temple-Wisconsin Cognitive Vulnerability to Depression Project (N = 345), the current study aimed to elucidate the relationship between cognitive vulnerability and depressive course using an actuarial statistical method. Results indicated that several cognitive measures predicted aspects of the onset and course of depression at rates significantly better than chance; foremost among these was the Cognitive Style Questionnaire (CSQ; Alloy et al., 2000). The CSQ was the variable that best differentiated between participants who developed an episode of depression and those who did not. Furthermore, the CSQ differentiated between participants who recovered from a given depressive episode and those who did not, as well as between participants who experienced a single episode and those who experienced a recurrent course of the disorder across the prospective phase of the study. Conceptual and clinical implications of these results are discussed, as are directions for future research. / Psychology
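The abstract above does not name the specific actuarial procedure used, so the following is a minimal, hypothetical sketch of one common actuarial approach to this kind of problem: fit a fixed statistical rule (here, a logistic regression on a single cognitive-style score) and quantify discrimination against chance with the ROC curve. All data and coefficients below are invented for illustration and are not the study's.

```python
# Illustrative sketch of an actuarial (statistical, cutoff-based) prediction of
# depression onset from a cognitive-style score. All names and data here are
# hypothetical; the study's actual models and variables are not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Hypothetical CSQ-like scores: higher scores loosely raise onset probability.
n = 345  # sample size borrowed from the project described above
csq = rng.normal(0.0, 1.0, size=n)
p_onset = 1.0 / (1.0 + np.exp(-(0.8 * csq - 1.0)))
onset = rng.binomial(1, p_onset)

# Actuarial model: a fixed statistical rule rather than clinical judgment.
model = LogisticRegression().fit(csq.reshape(-1, 1), onset)
scores = model.predict_proba(csq.reshape(-1, 1))[:, 1]

# "Better than chance" corresponds to AUC reliably above 0.5.
print(f"AUC = {roc_auc_score(onset, scores):.3f}")

# Choose the cutoff that maximizes Youden's J = sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(onset, scores)
best = np.argmax(tpr - fpr)
print(f"Suggested cutoff = {thresholds[best]:.3f}")
```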
307
Mean Velocity Prediction and Evaluation of k-ε Models in Turbulent Diffuser Flows / Kopp, Gregory 09 1900 (has links)
Eight decreasing adverse pressure gradient flows, and the similar regions of an initially increasing adverse pressure gradient flow, are examined in terms of the two experimentally observed half-power regions. The existing semi-empirical and analytical mean velocity profiles are examined and their range of applicability is determined in terms of the ratio of outer to inner half-power slopes. Three variations of the k-ε model of turbulence are evaluated in terms of how well they predict the turbulence field in an eight-degree conical diffuser. The model of Nagano and Tagawa (1990) is seen to be superior to the others. Nagano and Tagawa's model is able to yield reasonable predictions of k and ε because it implements the Hanjalic and Launder (1980) modification for the irrotational strains. However, the k-ε models' prediction of the Reynolds stresses is poor. / Thesis / Master of Engineering (ME)
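For context on the model family being evaluated, the standard high-Reynolds-number k-ε model closes the mean-flow equations with an eddy viscosity built from k and ε; the variants compared above modify terms of this baseline. A standard statement of the model is:

```latex
\nu_t = C_\mu \frac{k^2}{\varepsilon}

\frac{Dk}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon

\frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{\varepsilon 1}\,\frac{\varepsilon}{k}\,P_k - C_{\varepsilon 2}\,\frac{\varepsilon^2}{k}
```

with the standard constants C_μ = 0.09, C_ε1 = 1.44, C_ε2 = 1.92, σ_k = 1.0, σ_ε = 1.3, and P_k the production of turbulent kinetic energy. Low-Reynolds-number variants such as Nagano and Tagawa (1990) add damping functions and near-wall correction terms to these baseline equations.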
308
Re-thinking termination guarantee of eBPF / Sahu, Raj 10 June 2024 (has links)
In the rapidly evolving landscape of BPF as kernel extensions, where industry deploys an increasing number of simultaneously running BPF programs, the need to account for BPF-induced overhead on latency-sensitive kernel functions is becoming critical. We also find that eBPF's termination guarantee is insufficient to protect systems from BPF programs that run extraordinarily long due to compute-heavy operations and runtime factors such as contention. Operators lack a crucial mechanism to identify and avoid installing long-running BPF programs, and they also need a mechanism to abort BPF programs found to be adding high latency overhead on performance-critical kernel functions. In this work, we propose a runtime estimator and a dynamic termination mechanism to solve these two issues, respectively. We use a hybrid of static and dynamic analysis to produce a runtime range that we demonstrate encompasses the actual runtime of the BPF program. For safe BPF termination, we propose a short-circuiting approach that skips all costly operations and quickly reaches completion. Evaluating the proposed solutions, we find that the runtime estimate alone is too broad, but when paired with dynamic termination it can be used by a BPF Orchestrator to impose policies on the overhead due to BPF programs in a call path. The proposed dynamic termination solution has zero overhead on BPF programs in the no-termination case, with a verification overhead proportional to the number of helper calls in a BPF program. In the future, we aim to make BPF execution atomic, guaranteeing that kernel objects modified within a BPF program are always left in a consistent state in the event of program termination. / Master of Science / The Linux kernel has a relatively recent feature called eBPF which allows adding new code into a running system without needing a system reboot. Because of the flexibility it offers, eBPF is attracting widespread adoption for diverse use cases such as system health monitoring, security, and program acceleration. In this work, we identify that eBPF programs have a non-negligible performance impact on a system and, in the extreme case, can cause denial of service on the host machine despite passing all security checks enforced by eBPF. We propose a two-part solution: an eBPF runtime estimator and a Fast-Path termination mechanism. The runtime estimator aims to prevent the installation of eBPF programs that would cause a large performance impact, while Fast-Path termination acts as a safety net for cases where an installed program unexpectedly runs longer. Together, these enable better management of eBPF programs with respect to their performance impact and enforce strict bounds on the added latency. Potential future work includes factoring impacts beyond performance, such as inter-BPF interaction, into the solution, and designing knobs that an operator can tune to relax or constrain the side effects of the eBPF programs installed in the system.
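As a rough illustration of the hybrid static/dynamic estimation idea described above, a runtime range can be formed by combining static bounds on instruction and helper-call counts with dynamically profiled per-helper latency ranges. The thesis's actual analysis operates on real BPF bytecode and kernel measurements; everything below, including the program summary, instruction costs, and latency figures, is invented for illustration.

```python
# Hypothetical sketch of a hybrid static/dynamic runtime-range estimator for a
# BPF program. Helper names are real eBPF helpers, but all cost numbers and
# the program summary are illustrative assumptions, not measured values.
from dataclasses import dataclass

@dataclass
class ProgramSummary:
    # Static analysis output: bounds on executed instructions and helper calls.
    min_insns: int
    max_insns: int          # finite because the verifier enforces termination
    helper_calls: dict      # helper name -> worst-case call count

# Dynamically profiled per-helper latency ranges in nanoseconds (illustrative).
HELPER_LATENCY_NS = {
    "bpf_map_lookup_elem": (40, 900),   # fast on a hit, slow under contention
    "bpf_ktime_get_ns": (15, 40),
}
INSN_COST_NS = (0.3, 1.2)  # per-instruction cost range (illustrative)

def runtime_range_ns(prog: ProgramSummary) -> tuple[float, float]:
    """Combine static bounds with dynamic cost ranges into a runtime range."""
    lo = prog.min_insns * INSN_COST_NS[0]
    hi = prog.max_insns * INSN_COST_NS[1]
    for helper, count in prog.helper_calls.items():
        h_lo, h_hi = HELPER_LATENCY_NS[helper]
        lo += count * h_lo   # optimistic: every call takes its minimum
        hi += count * h_hi   # pessimistic: every call takes its maximum
    return lo, hi

prog = ProgramSummary(min_insns=200, max_insns=5000,
                      helper_calls={"bpf_map_lookup_elem": 8,
                                    "bpf_ktime_get_ns": 2})
print("estimated runtime: %.0f-%.0f ns" % runtime_range_ns(prog))
```

A range built this way is naturally broad, which matches the evaluation's finding that the estimate is most useful when paired with a dynamic termination backstop.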
309
Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty / Whiting, Nolan Wagner 19 July 2019 (has links)
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. Generally, the MAVM performed best in cases with sparse data and/or large extrapolations, and Bayesian calibration outperformed the others where extensive experimental data cover the application domain. / Master of Science / Uncertainties often exist when conducting physical experiments, and whether this uncertainty is due to input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be aleatory (randomness, to which a probability distribution for the likelihood of drawing values can be applied) or epistemic (lack of knowledge, with inputs drawn from within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM to account for small experimental sample sizes.
To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory and assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature, both at locations where experimental results are provided and at prediction locations where no experimental data are available. Also of interest was how well each method could predict the uncertainties in the simulation outside of the region in which experimental observations were made and model form uncertainties could be observed.
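As a reference point for the first of the four approaches, the area validation metric is the area between the simulation's cumulative distribution function and the experiment's empirical CDF. Below is a minimal sketch under the simplifying assumption that both sides are represented by finite samples; the lift-coefficient values are invented for illustration, not taken from the study.

```python
# Minimal sketch of the area validation metric (AVM): the area between the
# simulation's empirical CDF and the experiment's empirical CDF. Sample
# values are illustrative stand-ins for airfoil lift coefficients.
import numpy as np

def area_validation_metric(sim: np.ndarray, exp: np.ndarray) -> float:
    """Area between the empirical CDFs of simulation and experimental samples."""
    grid = np.sort(np.concatenate([sim, exp]))
    f_sim = np.searchsorted(np.sort(sim), grid, side="right") / len(sim)
    f_exp = np.searchsorted(np.sort(exp), grid, side="right") / len(exp)
    # Both CDFs are step functions, so a left Riemann sum over the merged
    # grid integrates |F_sim - F_exp| exactly.
    widths = np.diff(grid)
    return float(np.sum(np.abs(f_sim - f_exp)[:-1] * widths))

sim_cl = np.array([0.91, 0.93, 0.94, 0.95, 0.97])  # simulated lift coefficients
exp_cl = np.array([0.88, 0.90, 0.92])              # measured lift coefficients
print(f"AVM = {area_validation_metric(sim_cl, exp_cl):.4f}")
```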
310
Finger force capability: measurement and prediction using anthropometric and myoelectric measures / Astin, Angela DiDomenico 14 January 2000 (has links)
Hand and finger force data are used in many settings, including industrial design and indicating progress during rehabilitation. The application of appropriate work design principles, during the design of tools and workstations that involve the use of the hand and fingers, may minimize upper extremity injuries within the workplace. Determination and integration of force capabilities and requirements is an essential component of this process. Available data in the literature have focused primarily on whole-hand or multi-digit pinch exertions. The present study compiled and examined maximal forces exerted by the fingers in a variety of couplings, both to enhance and to supplement the available data. These data were used to determine whether finger strength could be predicted from other strength measures and anthropometry. In addition, this study examined whether exerted finger forces could be estimated using surface electromyography obtained from standardized forearm locations. Such processes are of utility when designing and evaluating hand tools and human-machine interfaces involving finger-intensive tasks, since the integration of finger force capabilities and task requirements is necessary to reduce the risk of injury to the upper limbs.
Forces were measured using strain gauge transducers, and a modification of standard protocols was followed to obtain consistent and applicable data. Correlations within and among maximum finger forces, whole-hand grip force, and anthropometric measures were examined. Multiple regression models were developed to determine the feasibility of predicting finger strength in various finger couplings from more accessible measures. After examining a wide variety of such mathematical models, the results suggest that finger strength can be predicted from easily obtained measures with only moderate accuracy (R²-adj: 0.45 - 0.64; standard error: 11.95 N - 18.88 N). Such models, however, begin to overcome the limitations of direct finger strength measurements of individuals.
Surface electrodes were used to record electromyographic signals from three standardized electrode sites on the forearm. Multiple linear regression models were generated to predict finger force levels with the three normalized electromyographic measures as predictor variables. The results suggest that standardized procedures for obtaining EMG data and simple linear models can be used to accurately predict finger forces (R²-adj: 0.77 - 0.88; standard error: 9.21 N - 12.42 N) during controlled maximal exertions. However, further work is needed to determine whether the models can be generalized to more complex tasks. / Master of Science
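A minimal sketch of the kind of model described in the last paragraph is shown below: ordinary least squares mapping three normalized EMG amplitudes to finger force, scored by adjusted R² and standard error. The data are synthetic stand-ins, not the study's measurements.

```python
# Illustrative sketch of a multiple linear regression predicting finger force
# from three normalized forearm EMG amplitudes. All data below are synthetic;
# the study fit comparable models to measured signals.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictors: normalized EMG (fraction of maximum) at three sites.
n = 60
emg = rng.uniform(0.1, 1.0, size=(n, 3))
true_coef = np.array([55.0, 30.0, 20.0])            # N per unit normalized EMG
force = emg @ true_coef + rng.normal(0.0, 10.0, n)  # "measured" force, newtons

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(n), emg])
beta, *_ = np.linalg.lstsq(X, force, rcond=None)
pred = X @ beta

# Adjusted R^2 penalizes R^2 for the number of predictors (p = 3 here).
ss_res = np.sum((force - pred) ** 2)
ss_tot = np.sum((force - force.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - 3 - 1)
se = np.sqrt(ss_res / (n - 3 - 1))  # residual standard error in newtons
print(f"R^2-adj = {r2_adj:.2f}, SE = {se:.2f} N")
```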