51 |
Multi-scale Modeling of Chemical Vapor Deposition: From Feature to Reactor Scale. Jilesen, Jonathan. January 2009 (has links)
Multi-scale modeling of chemical vapor deposition (CVD) is a very broad topic because a large number of physical processes affect the quality and speed of film deposition. These processes have different length scales associated with them, creating the need for a multi-scale model. The three main scales of importance to the modeling of CVD are the reactor scale, the feature scale, and the atomic scale. The reactor scale ranges from meters to millimeters and is so named because it corresponds to the scale of the reactor geometry. The micrometer scale is labeled the feature scale in this study because it is the scale of the feature geometries; it is also the scale at which grain boundaries and surface quality can be discussed. The final scale of importance to the CVD process is the atomic scale.
The focus of this study is on the reactor and feature scales, with particular emphasis on the coupling between them. Currently, there are two main methods of coupling the reactor and feature scales. The first method is mainly applied when a modified line-of-sight feature scale model is used, with coupling occurring through a mass balance performed at the wafer surface. The second method is only applicable to Monte Carlo based feature scale models; here, coupling is accomplished through a mass balance performed at a plane offset from the surface.
During this study, a means of using an offset plane to couple a continuum-based reactor/meso-scale model to a modified line-of-sight feature scale model was developed. The new model was then applied to several test cases and compared with the surface coupling method. To facilitate coupling at an offset plane, a new feature scale model called Ballistic Transport with Local Sticking Factors (BTLSF) was developed. The BTLSF model uses a source plane instead of a hemispherical source to calculate the initial deposition flux arriving from the source volume. The advantage of using a source plane is that it can be made coincident with the coupling plane. The presence of only one interface between the feature and reactor/meso scales simplifies coupling. Modifications were also made to the surface coupling method to allow it to model non-uniform patterned features.
Comparison of the two coupling methods showed that they produced similar results, with a maximum difference of 4.6% in their effective growth rate maps. However, the shapes of the individual effective reactivity functions produced by the offset coupling method are more realistic, without the step functions present in the effective reactivity functions of the surface coupling method. The cell size of the continuum-based component of the multi-scale model was also shown to be limited when the surface coupling method was used.
Thanks to the work done in this study, researchers using a modified line-of-sight feature scale model now have a choice of using either a surface or an offset coupling method to link their reactor/meso and feature scales. Furthermore, the comparative study of these two methods in this thesis highlights the differences between them, allowing their selection to be an informed decision.
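As a rough illustrative sketch of the kind of comparison reported above (this is not code from the thesis; the array names, the grid, and the point-wise definition of percent difference are assumptions), the maximum percent difference between two effective growth rate maps could be computed as follows:

```python
import numpy as np

def max_percent_difference(map_surface, map_offset):
    """Maximum point-wise percent difference between two effective
    growth-rate maps sampled on the same grid, using the
    surface-coupled map as the reference."""
    diff = np.abs(map_offset - map_surface) / np.abs(map_surface)
    return 100.0 * diff.max()

# Hypothetical maps from the two coupling methods on a 64 x 64 wafer grid.
rng = np.random.default_rng(0)
surface_map = 1.0 + 0.1 * rng.random((64, 64))
offset_map = surface_map * (1.0 + 0.04 * rng.random((64, 64)))
print(f"max difference: {max_percent_difference(surface_map, offset_map):.1f}%")
```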
|
52 |
The Development of Self-action Control Questionnaire. Tsai, Chu-Chu. 09 August 2012 (has links)
The purpose of this study is to construct a questionnaire on self-action control of academic performance for college students, based on Kuhl and Kraska's (1989) action control theory. Purposive sampling was used: freshman and sophomore students were randomly selected from various departments of Sun Yat-sen University, giving a total of 409 examinees. Four experts reviewed the content of the questionnaire for validity, judging the appropriateness of the items and suggesting modifications, which yielded a pre-test of about 78 items. The questionnaire was administered through an online computer system that randomly selected 10 items from each of the scale's three dimensions for each college student. In a pre-analysis of the results with the rating scale model (RSM) in the ConQuest software, 11 items fell outside the acceptable fit range, leaving 67 well-fitting items in the formal questionnaire. Overall, the items were too easy. The reliabilities of the three sub-scales are 0.62, 0.65 and 0.53. Based on these findings, it is recommended that more difficult items be added to the questionnaire in the future.
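As a hedged sketch of how sub-scale reliabilities of this kind are commonly computed (this is not the author's analysis; the response matrix, the 1-5 scoring range, and the 10-items-per-dimension layout are placeholders), Cronbach's alpha for each dimension can be obtained from the item responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (examinees x items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Hypothetical responses: 409 examinees, three dimensions of 10 items each.
# Random placeholder data gives alpha near zero; the study reported
# 0.62, 0.65 and 0.53 for its real sub-scales.
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(409, 30))
for d in range(3):
    block = responses[:, d * 10:(d + 1) * 10]
    print(f"sub-scale {d + 1}: alpha = {cronbach_alpha(block):.2f}")
```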
|
53 |
The Compassion Scale. Pommier, Elizabeth Ann. 09 February 2011 (has links)
These studies define a Buddhist conceptualization of compassion and describe the development of the Compassion Scale. The definition of compassion was adopted from Neff's (2003) model of self-compassion, which proposes that the construct entails kindness, common humanity, and mindfulness. The six-factor structure was adopted from the Self-Compassion Scale (Neff, 2003), representing positively and negatively worded items for the three proposed components of compassion. The six factors are named kindness vs. indifference, common humanity vs. separation, and mindfulness vs. disengagement. Study 1 was conducted to provide support for content validity, Study 2 to provide initial validation for the scale, and Study 3 to cross-validate the findings from the second study. Results provide evidence for the structure of the scale. Cronbach's alpha and split-half estimates suggest good reliability for both samples. Compassion was significantly correlated with compassionate love, wisdom, social connectedness, and empathy, providing support for convergent validity. Factor analysis in both samples indicated good fit using Hu and Bentler's (1998) criteria. Results suggest that the Compassion Scale is a psychometrically sound measure of compassion. Given that Buddhist concepts of compassion are receiving increased attention in psychology (e.g. Davidson, 2006; Gilbert, 2005; Goetz, 2010), this scale will hopefully prove useful in research that examines compassion from a non-Western perspective.
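A minimal sketch of a split-half reliability estimate of the sort mentioned above (not from these studies; the odd/even item split, the item count, and the response data are assumptions), with the Spearman-Brown correction applied to the half-test correlation:

```python
import numpy as np

def split_half_reliability(items):
    """Split-half reliability with Spearman-Brown correction.

    items: (respondents x items) array; an odd/even item split is used
    here, which is one common (but not the only) way to form the halves."""
    items = np.asarray(items, dtype=float)
    half_a = items[:, 0::2].sum(axis=1)
    half_b = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2.0 * r / (1.0 + r)

# Hypothetical 24-item scale scored 1-5 by 300 respondents; random
# placeholder data gives a low estimate, unlike real scale data.
rng = np.random.default_rng(2)
scores = rng.integers(1, 6, size=(300, 24))
print(f"split-half reliability = {split_half_reliability(scores):.2f}")
```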
|
54 |
A systematic review of the effectiveness of the Gonstead technique. Harrison, Michael R. 25 July 2014 (has links)
Submitted in partial compliance with the requirements for the Master’s Degree in Technology: Chiropractic, Durban University of Technology, 2014. / Background: Practitioners are required to practice evidence-based medicine. The availability of large volumes of information makes this practice style difficult for the practitioner. However, a systematic review allows the literature to be organised and rated, and provides current, abbreviated research resources for practitioners in clinical practice.
Objectives: The effectiveness of the Gonstead Chiropractic Technique (GCT) was evaluated to present current evidence available for various conditions for which the GCT is utilised in clinical practice. Thus, the aim of the study was to systematically review, collate and evaluate the research evidence in the literature to determine the effectiveness of the GCT.
Method: A literature search was conducted, based on key terms including: Gonstead and manual, Gonstead and technique, and Gonstead and manipulative/manipulation. Databases searched were: CINAHL Plus, Google Scholar, MEDLINE, Metalib, Pubmed, Science Direct, Springerlink and Summons. The articles were screened according to inclusion and exclusion criteria, after which secondary hand and reference searches were done. Thereafter the articles were reviewed by six independent reviewers. Appropriate scales were used to rate the methodological rigour of each article (e.g. PEDro). The results were analysed and ranked, before these outcomes were classified and contextualised in the clinical conditions on which the included studies were based.
Results: A total of 477 citations were identified; after screening, 26 English articles remained. Two articles were added through the secondary hand-search. Limited to no evidence existed for the effectiveness of the GCT for neck pain / headache / face pain, and limited evidence existed for gynaecological issues, scoliosis, neurological disorders, fractures, blood pressure and physiological presentations. Consensus was evident for gynaecological issues, neurological disorders, fractures (with the exception of the undiagnosed fracture) and physiological presentations, whereas the evidence for neck pain / headache / face pain and scoliosis was conflicting.
Conclusion: The limited evidence shows a need for future studies with stringent methodological rigour, so as to investigate the appropriateness / inappropriateness of the use of the GCT. The lack of evidence for the GCT may compromise appropriate informed consent and treatment. Therefore, practitioners are encouraged to use appropriate and validated tools to measure the patient’s clinical progress.
|
55 |
Climatic influences on the grapevine: a study of viticulture in the Waipara basin. Sluys, Shona Lee. January 2006 (has links)
Climate is one of the most important factors influencing where wine grapes can be grown and the quality of wine produced from those grapes. A plant's habitat has a profound influence on its growth and development. The surrounding climatic conditions at both the macro- and meso-scales influence plant-climate interactions at the micro-scale. The main study site is the McKenzie Vineyard, which is owned by Torlesse Wines. The climatic conditions of the surrounding Waipara region were also studied using climate data from the following vineyards: Canterbury House, River Terrace and Waipara West. The overall aim of this research is to improve understanding of the influence of the climatic environment on grapevine development at the meso- to micro-scale. The main findings of the research were, firstly, that the most important climatic factor influencing grapevine development and growth is temperature and, secondly, that temperature varies across the Waipara Basin. Future research should cover the entire growing season to gain a better understanding of how temperature influences grapevine development over the season as a whole.
|
56 |
On the searching efficiency of "Rodolia cardinalis" (Mulsant) (Coleoptera, Coccinellidae) and its response to prey patches / Prasad, Yugal Kishore. January 1985 (has links) (PDF)
Thesis (Ph. D.)--University of Adelaide, 1985. / Includes bibliographical references.
|
57 |
Integrating local information for inference and optimization in machine learning. Zhu, Zhanxing. January 2016 (has links)
In practice, machine learners often care about two key issues: one is how to obtain a more accurate answer with limited data, and the other is how to handle large-scale data (often referred to as “Big Data” in industry) for efficient inference and optimization. One solution to the first issue might be aggregating learned predictions from diverse local models. For the second issue, integrating the information from subsets of the large-scale data is a proven way of achieving computation reduction. In this thesis, we have developed novel frameworks and schemes to handle several scenarios in each of these two salient issues. For aggregating diverse models – in particular, aggregating probabilistic predictions from different models – we introduce a spectrum of compositional methods, Rényi divergence aggregators, which are maximum entropy distributions subject to biases from individual models, with the Rényi divergence parameter dependent on the bias. Experiments are implemented on various simulated and real-world datasets to verify the findings. We also show the theoretical connections between Rényi divergence aggregators and machine learning markets with isoelastic utilities. The second issue involves inference and optimization with large-scale data. We consider two important scenarios: one is optimizing a large-scale Convex-Concave Saddle Point problem with a Separable structure, referred to as Sep-CCSP; the other is large-scale Bayesian posterior sampling. Two different settings of the Sep-CCSP problem are considered: with strongly convex functions and with non-strongly convex functions. We develop efficient stochastic coordinate descent methods for both cases, which allow fast parallel processing of large-scale data. Both theoretically and empirically, the developed methods are demonstrated to perform comparably to, or more often better than, state-of-the-art methods. To handle the scalability issue in Bayesian posterior sampling, the stochastic approximation technique is employed, i.e., only touching a small mini-batch of data items to approximate the full likelihood or its gradient. To deal with the subsampling error introduced by stochastic approximation, we propose a covariance-controlled adaptive Langevin thermostat that can effectively dissipate parameter-dependent noise while maintaining a desired target distribution. This method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications.
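As an illustrative sketch of the stochastic-approximation idea described above, the snippet below implements plain stochastic gradient Langevin dynamics, not the covariance-controlled adaptive Langevin thermostat developed in the thesis; the model, step size, and data are toy placeholders.

```python
import numpy as np

def sgld_sample(grad_log_prior, grad_log_lik, data, theta0,
                n_iters=5000, batch_size=64, step=1e-4, rng=None):
    """Plain SGLD: at each step the full-data likelihood gradient is
    approximated from a random mini-batch, and Gaussian noise scaled to
    the step size is injected so the iterates approximately sample the
    posterior."""
    rng = rng if rng is not None else np.random.default_rng(0)
    data = np.asarray(data)
    n = len(data)
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(n_iters):
        batch = data[rng.choice(n, size=batch_size, replace=False)]
        grad = grad_log_prior(theta) + (n / batch_size) * grad_log_lik(theta, batch)
        theta = theta + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.array(samples)

# Toy posterior over the mean of a unit-variance Gaussian with an N(0, 1) prior.
rng = np.random.default_rng(3)
data = rng.normal(1.5, 1.0, size=10_000)
draws = sgld_sample(lambda t: -t,
                    lambda t, x: np.atleast_1d((x - t).sum()),
                    data, theta0=[0.0], rng=rng)
print(draws[1000:].mean())  # close to the sample mean of the data
```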
|
58 |
The evaluation of accounting-based valuation models in the UK. Shen, Yun. January 2010 (has links)
This study provides two empirical studies in market-based accounting research. The first uses out-of-sample valuation errors to evaluate various estimation approaches for firm-valuation models; the second uses portfolio analysis to evaluate an empirical accounting-based firm valuation model developed in the UK context. The first study uses out-of-sample valuation errors as an alternative metric capturing the effectiveness of various estimation approaches in generating reliable estimates of coefficients in accounting-based valuation models and, accordingly, less valuation bias and higher valuation accuracy. Valuation bias is expressed as the mean proportional valuation error, where the proportional valuation error is the estimated market value less the actually observed market value, divided by the actual market value; valuation accuracy is measured by both the mean absolute and the mean squared proportional valuation error. We find that deflating the full equation, including the constant term of the undeflated model, and hence estimating without a constant term in the deflated model, provides less biased and more accurate value estimates relative to including a constant term in the regression equation. Estimating the valuation model on high- and low-intangible-asset firms separately, instead of pooling the full sample for estimation, also provides better performance in all cases. As expected, the results suggest that an extended model including the main accounting variables found to be associated with market value in the UK is better specified than a benchmark model, widely adopted in prior research, where market value is regressed on book value and earnings alone. Inclusion of 'other information' also seems to improve the performance of the models. However, there is no clear evidence that one particular deflator out of the five we investigate outperforms the others, although book value and opening and closing market value appear to generally perform better than sales and number of shares. The second empirical study tests for the existence of a 'mispricing' effect associated with accounting-based valuation models in the UK. It investigates a specific firm valuation model where market value is expressed as a linear combination of book value, earnings, research and development expenditures, dividends, capital contributions, capital expenditures and other information. All these accounting variables have been found value-relevant in prior UK studies. Firms are ranked by in-sample proportional valuation errors. Results show that although firms in the higher rank deciles tend to have higher abnormal returns than firms in the lower rank deciles, the difference between the two extreme portfolios (or the hedge returns) is statistically insignificant. As a consequence, accounting-based valuation models do not seem to provide estimates of intrinsic value superior to market values. We can conclude that the UK stock market is semi-strong form efficient, in the sense that it does not appear to be possible to generate positive abnormal returns based upon publicly available accounting information embedded in the valuation models studied.
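A minimal sketch of the error metrics defined above (the variable names and sample values are assumptions; the thesis's deflation and estimation procedures are not reproduced here):

```python
import numpy as np

def valuation_error_metrics(estimated, actual):
    """Bias and accuracy metrics built from proportional valuation errors,
    where the proportional error is (estimated - actual) / actual."""
    estimated = np.asarray(estimated, dtype=float)
    actual = np.asarray(actual, dtype=float)
    prop_err = (estimated - actual) / actual
    return {
        "bias (mean proportional error)": prop_err.mean(),
        "accuracy (mean absolute proportional error)": np.abs(prop_err).mean(),
        "accuracy (mean squared proportional error)": (prop_err ** 2).mean(),
    }

# Hypothetical out-of-sample value estimates versus observed market values.
estimated = np.array([120.0, 95.0, 310.0, 48.0])
actual = np.array([100.0, 100.0, 300.0, 50.0])
print(valuation_error_metrics(estimated, actual))
```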
|
59 |
Mesoscale Eddy Dynamics and Scale in the Red Sea. Campbell, Michael F. 12 1900 (has links)
Recent efforts in understanding the variability inherent in coastal and offshore waters have highlighted the need for sampling at finer spatial and temporal resolutions. Gliders are increasingly used in these transitional waters because of their ability to provide such finer-resolution data sets in areas where satellite coverage may be poor, ship-based surveys may be impractical, and important processes may occur below the surface. Since no single instrument platform provides coverage across all needed spatial and temporal scales, ocean observation systems use multiple types of instrument platforms for data collection. However, this results in increasingly large volumes of data that need to be processed and analyzed, and there is no current “best practice” methodology for combining these instrument platforms. In this study, high-resolution glider data, High Frequency Radar (HFR), and satellite-derived data products (MERRA_2 and ARMOR3D NRT Eddy Tracking) were used to: 1) quantify the dominant scales of variability of the central Red Sea; 2) determine the minimum sampling frequency required to adequately characterize the central Red Sea; 3) assess whether the fine-scale persistency of oceanographic variables determined from the glider data is comparable to that identified using HFR and satellite-derived data products; and 4) provide additional descriptive information regarding eddy occurrence and strength in the Red Sea from 2018-2019. Both the Integral Time Scale and Characteristic Length Scale analyses show that the persistence time frame from glider data for temperature, salinity, chlorophyll-α, and dissolved oxygen is 2-4 weeks and that these temporal scales match for HFR and MERRA_2 data, consistent with a ”weather-band” level of temporal variability. The description of eddy activity in the Red Sea also supports this 2-4-week time frame, with the average durations of cyclonic and anticyclonic eddies from 2018-2019 being 22 and 27 days, respectively. Adoption of scale-based methods across multiple ocean observation areas can help define “best practice” methodologies for combining glider, HFR, and satellite-derived data to better understand the naturally occurring variability and improve resource allocation.
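As a hedged sketch of the integral-time-scale idea mentioned above (not the study's code; the sampling interval, the zero-crossing cutoff, and the series itself are placeholders), the integral time scale can be estimated by integrating the autocorrelation function of a de-meaned series up to its first zero crossing:

```python
import numpy as np

def integral_time_scale(series, dt_hours=1.0):
    """Estimate the integral time scale of a 1-D series by summing its
    normalised autocorrelation function up to the first zero crossing
    (a simple rectangle-rule integration)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]
    first_zero = int(np.argmax(acf <= 0.0)) if np.any(acf <= 0.0) else acf.size
    return dt_hours * acf[:first_zero].sum()

# Hypothetical hourly temperature record with a slowly varying component.
rng = np.random.default_rng(4)
t = np.arange(24 * 90)  # roughly three months of hourly samples
temp = 30 + 2 * np.sin(2 * np.pi * t / (24 * 20)) + 0.3 * rng.standard_normal(t.size)
print(f"integral time scale ~ {integral_time_scale(temp) / 24:.1f} days")
```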
|
60 |
Exploring the suitability of rating scales for measuring bullying among Grade 4 learners. Nchoe, Katlego Elaine. January 2017 (has links)
The purpose of this quantitative study was to investigate which bullying rating scale, the Likert Scale (LS) or the Visual Analogue Scale (VAS), is more appropriate for Grade 4 learners. Although the literature verifies the reliability of these two rating scales for measuring bullying in young children, their validity and suitability for young learners have not been extensively explored in the South African context. The concern with bullying in this study has to do with the need for accurate assessment and measurement of bullying, since a proper understanding of bullying depends on the accuracy of the instrument used. Against this backdrop, this study employed a survey design, rooted in a post-positivist conceptualisation of bullying, using a bullying questionnaire. The study’s questionnaire consisted of both LS and VAS response options and was used to measure both the bullies’ and the victims’ response-option preferences (LS versus VAS), in addition to assessing the reliability and validity of both response options. A class of Grade 4 learners from one Model C school formed part of the survey, and those who were willing to participate completed the Learner Bullying Questionnaire (LBQ). The school was selected using a purposive, non-probability sampling method based on the geographical area, in addition to the incidence of bullying and the diversity of the school population. The quantitative data obtained from the survey questionnaires were analysed statistically using descriptive statistics as well as the Spearman correlation coefficient, to determine the correlation between the VAS and LS responses for each question presented. Using the Wilcoxon test, the differences between the two response options were determined (i.e. the variances in the preference scores and difficulty scores of the Grade 4 learners for the two response options). The results of the LBQ show no significant difference in scale preference for the Grade 4 learners. However, in the six scale-preference questions included near the end of the LBQ, the learners indicated that they preferred the VAS over the LS. / Dissertation (MEd)--University of Pretoria, 2017. / Educational Psychology / MEd / Unrestricted
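A minimal sketch of the two tests named above, using SciPy on hypothetical paired data (the response values, the 5-point LS range, the 0-100 VAS range, and the difficulty scores are assumptions, not the study's dataset):

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

# Hypothetical paired responses from 12 learners who answered the same
# bullying question with both response formats.
likert = np.array([1, 2, 2, 3, 4, 5, 3, 2, 4, 1, 5, 3])          # 5-point LS
vas = np.array([10, 35, 30, 55, 70, 95, 52, 25, 80, 5, 90, 60])  # 0-100 VAS

# Spearman correlation between the VAS and LS responses for one question.
rho, rho_p = spearmanr(likert, vas)

# Hypothetical per-learner difficulty scores for each response format,
# compared with the Wilcoxon signed-rank test.
difficulty_ls = np.array([2, 3, 1, 4, 2, 3, 5, 2, 3, 4, 1, 2])
difficulty_vas = np.array([1, 2, 2, 3, 1, 2, 4, 1, 4, 2, 2, 1])
stat, w_p = wilcoxon(difficulty_ls, difficulty_vas)

print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
print(f"Wilcoxon W = {stat:.1f} (p = {w_p:.3f})")
```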
|