1 |
Measuring Relative Efficiency and Optimal Scale: An Application to Kaohsiung City Fire Prevention Division. Lin, Lien-shin, 11 September 2007 (has links)
none
|
2 |
Application of Data Envelopment Analysis to Evaluate Efficiency of Nursing Units: Sample of Two Medical Centers. Au, Wai-Yung, 27 August 2003 (has links)
In recent years, most studies of hospital efficiency have focused on ownership, size, physician teamwork, and clinical performance. Nursing units are certainly major profit centers for hospitals, yet their relative efficiency in terms of resource use and outputs is seldom investigated. The aim of this study is therefore to investigate the influence of each input and output item on relative efficiency using Data Envelopment Analysis (DEA). The research sample included 44 units of two government-owned hospitals accredited as medical centers in 2002, divided into 5 groups. The input items were full-time nurses, continuing education hours, cost expenditure, specialties, patient care hours, and workload. The output items were length of stay, number of admissions, occupancy rate, and total patient days. Each nursing unit was treated as a Decision Making Unit (DMU). Data were collected from January to December 2002 to evaluate overall efficiency, technical efficiency, and scale efficiency. Efficiency reference sets were identified to serve as benchmarks for inefficient nursing units. Ways to improve the resource inputs or outputs of the inefficient nursing units were suggested using slack variable analysis, and the influence of each input and output variable on relative efficiency was assessed using sensitivity analysis.
The results are summarized as follows:
1. Overall inefficient nursing units: hospital A has 4 (20%), hospital B has 5 (20%), and hospitals A and B together have 9 (21%); medical nursing units account for 5 (17%) and surgical nursing units for 7 (47%).
2. Technically inefficient nursing units: hospital A has 2 (10%), hospital B has 2 (8%), and hospitals A and B together have 3 (6%); medical nursing units account for 1 (3%) and surgical nursing units for 4 (27%).
3. Scale-inefficient nursing units: hospital A has 4 (20%), hospital B has 5 (20%), and hospitals A and B together have 9 (21%); medical nursing units account for 5 (17%) and surgical nursing units for 7 (47%).
4. Relatively inefficient nursing units with overall efficiency values between 0.9 and 1 are marginally inefficient units. The overall inefficiency of these nursing units is due to scale inefficiency.
5. Slack variable analysis indicates that the three inputs most in need of reduction are continuing education hours, patient care hours, and the number of full-time nurses; the outputs most in need of increase are the number of admissions and the length of stay.
In this study, the input items used to evaluate the efficiency of nursing units are based mainly on clinical productivity data. The quality of nursing care, patient satisfaction indices, the amount of medical equipment, and standards for patient care activities were not considered. It is strongly suggested that service indices and equipment allocation be considered when evaluating efficiency.
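For readers unfamiliar with the mechanics of DEA, the following is a minimal sketch of how an input-oriented CCR efficiency score could be computed for units like these. The library (SciPy), the input/output choices, and all numeric values are illustrative assumptions, not data or code from the study.

```python
# A minimal sketch of an input-oriented CCR DEA model solved as a linear
# program with SciPy.  The three inputs / two outputs and the unit values
# below are hypothetical, chosen only to illustrate the computation.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (nursing units), columns = inputs / outputs
X = np.array([[12.0, 300.0, 5.0],   # e.g. full-time nurses, education hours, specialties
              [15.0, 420.0, 6.0],
              [10.0, 280.0, 4.0],
              [18.0, 500.0, 7.0]])
Y = np.array([[260.0, 8200.0],      # e.g. admissions, total patient days
              [300.0, 9100.0],
              [240.0, 7800.0],
              [310.0, 9000.0]])

def ccr_efficiency(X, Y, k):
    """Efficiency of DMU k: minimise theta subject to the envelopment constraints."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # decision vars: [theta, lambda_1..lambda_n]
    # inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k], X.T]
    b_in = np.zeros(X.shape[1])
    # outputs: -sum_j lambda_j * y_rj <= -y_rk   (i.e. outputs at least as large as DMU k's)
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for k in range(X.shape[0]):
    print(f"DMU {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```

A score of 1 identifies units on the efficient frontier; scores below 1 indicate by how much a unit could proportionally shrink its inputs while keeping its outputs, which is the sense of "overall efficiency" used in the abstract.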
|
3 |
Relative efficiency of surface energy budgets over different land covers. January 2012 (has links)
abstract: The partitioning of available solar energy into different fluxes at the Earth's surface is important in determining physical processes such as turbulent transport, subsurface hydrology, and land-atmosphere interactions. Direct measurements of these turbulent fluxes are made with eddy-covariance (EC) towers. However, the distribution of EC towers is sparse owing to their relatively high cost and the practical difficulties of logistics and deployment. As a result, the data are temporally and spatially limited and inadequate for research at large scales, such as regional and global climate modeling. Besides field measurements, an alternative is to estimate turbulent fluxes from intrinsic relations between the surface energy budget components, arising largely through thermodynamic equilibrium. These relations, referred to as relative efficiencies, have been incorporated in several models to estimate the magnitude of the turbulent fluxes in the surface energy budget, such as latent and sensible heat. In this study, three theoretical models, based respectively on a lumped heat transfer model, linear stability analysis, and the maximum entropy principle, were investigated. Model predictions of relative efficiencies were compared with turbulent flux data over different land covers, viz. lake, grassland, and suburban surfaces. Similar results were observed over the lake and suburban surfaces, but significant deviation was found over the vegetated surface. The relative efficiency of outgoing longwave radiation was found to deviate from theoretical predictions by orders of magnitude. The results also show that the energy partitioning process is strongly influenced by surface water availability. The study provides insight into which surface properties determine the energy partitioning process over different land covers and gives suggestions for future models. / Dissertation/Thesis / M.S. Civil and Environmental Engineering 2012
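For background, the partitioning described above takes place within the surface energy balance; the expressions below are a generic illustration of the idea of flux ratios ("relative efficiencies"), not the specific formulations tested in the thesis.

```latex
% Surface energy balance: net radiation R_n is partitioned among sensible heat H,
% latent heat \lambda E, and the ground heat flux G:
\[
  R_n = H + \lambda E + G .
\]
% A generic relative efficiency compares how readily one flux takes up the available
% energy relative to another, e.g. latent versus sensible heat, which is the inverse
% of the Bowen ratio \beta:
\[
  \frac{\lambda E}{H} = \frac{1}{\beta}.
\]
```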
|
4 |
Relative Efficiency of Adjusted and Unadjusted Analyses when Baseline Data are Partially Missing. Feng, Yue shan, 09 1900 (has links)
Many medical studies are performed to investigate the effectiveness of new treatments (such as new drugs or new surgery) versus traditional (or placebo) treatments. In many cases, researchers measure a continuous variable at baseline and again as an outcome assessed at follow-up. The baseline measurement usually has a strong relationship with the post-treatment measurement; consequently, the ANCOVA model using baseline as a covariate may provide more powerful and precise results than the ANOVA model.

However, most epidemiologic studies encounter the problem of missing covariate data. As a result, patients with missing baseline measurements are excluded from the analysis. Hence there is a trade-off between the ANOVA with the full data set and the ANCOVA with the partial data set.

This study focuses on the variance of the estimator of the difference in treatment means. In practice, the standard error of the estimator obtained from the ANCOVA model with a partially missing baseline, relative to the standard error obtained from the ANOVA with full data, depends on the correlation between the baseline and follow-up outcome, the proportion of missing baseline values, and the difference of the group means at baseline. In studies of moderate sample size, it is also affected by the sample size.

The minimum correlations theoretically required for the ANCOVA model to match the precision of the ANOVA model were calculated assuming the missing proportion, sample size, and difference of group means on the covariate are available; the minimum correlation can be obtained from the reference table or figures.

The figures of asymptotic relative efficiencies give the asymptotic variance and the length of the confidence interval of the estimated difference obtained from the ANCOVA model relative to the ANOVA model over the whole range of the correlation between baseline and follow-up. / Thesis / Master of Science (MSc)
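A much-simplified version of the trade-off described above, ignoring the baseline group-mean difference and finite-sample corrections, shows where the minimum required correlation comes from; this is a rough sketch, not the exact expressions derived in the thesis.

```latex
% Two-group comparison with n patients per arm, outcome variance \sigma^2,
% baseline-outcome correlation \rho, and a fraction p of patients missing baseline.
% ANOVA on all patients vs. complete-case ANCOVA on the (1-p)n patients with baseline:
\[
  \operatorname{Var}_{\text{ANOVA}}(\hat\Delta) \approx \frac{2\sigma^2}{n},
  \qquad
  \operatorname{Var}_{\text{ANCOVA}}(\hat\Delta) \approx \frac{2\sigma^2 (1-\rho^2)}{(1-p)\,n}.
\]
% The complete-case ANCOVA is at least as precise as the full-data ANOVA when
\[
  \frac{1-\rho^2}{1-p} \le 1
  \quad\Longleftrightarrow\quad
  \rho^2 \ge p ,
\]
% i.e. under these simplifications the break-even correlation grows as \sqrt{p}
% with the missing fraction.
```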
|
5 |
Aspects of Composite Likelihood Inference. Jin, Zi, 07 March 2011 (has links)
A composite likelihood consists of a combination of valid likelihood objects; in particular, it is typically of interest to adopt lower-dimensional marginal likelihoods. The composite marginal likelihood is an attractive alternative for modeling complex data and has received increasing attention for handling high-dimensional data sets when the joint distribution is computationally difficult to evaluate, or intractable due to a complex dependence structure. We present some aspects of methodological development in composite likelihood inference. The resulting estimator enjoys desirable asymptotic properties such as consistency and asymptotic normality. Composite likelihood based test statistics and their asymptotic distributions are summarized, and higher-order asymptotic properties of the signed composite likelihood root statistic are explored. Moreover, we compare the accuracy and efficiency of composite likelihood estimation relative to estimation based on the ordinary likelihood. Analytical and simulation results are presented for several models, including multivariate normal distributions, time series models, and correlated binary data.
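As a concrete instance of the construction described above, the pairwise marginal composite likelihood and the Godambe (sandwich) information that drives the estimator's asymptotics can be written as follows; this is standard background notation, not the specific models analysed in the thesis.

```latex
% Pairwise (composite marginal) log-likelihood for a d-dimensional observation y:
\[
  c\ell(\theta; y) \;=\; \sum_{i<j} \log f(y_i, y_j;\, \theta).
\]
% The maximum composite likelihood estimator is consistent and asymptotically normal,
% with asymptotic variance governed by the Godambe (sandwich) information
\[
  G(\theta) \;=\; H(\theta)\, J(\theta)^{-1} H(\theta),
  \qquad
  H(\theta) = \mathbb{E}\!\left[-\nabla^2 c\ell(\theta; Y)\right],
  \quad
  J(\theta) = \operatorname{Var}\!\left[\nabla c\ell(\theta; Y)\right],
\]
% and its efficiency relative to the full-likelihood MLE is assessed by comparing
% G(\theta)^{-1} with the inverse Fisher information.
```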
|
6 |
Comparisons of Estimators of Small Proportion under Group Testing. Wei, Xing, 02 July 2015 (has links)
Binomial group testing has long been recognized as an efficient method for estimating the proportion of subjects with a specific characteristic. The method is superior to the classic maximum likelihood estimator (MLE), particularly when the proportion is small. Under the group testing model, we assume the testing is conducted without error. In the present research, a new Bayes estimator is proposed that utilizes an additional piece of information: the proportion to be estimated is small and lies within a given range. It is observed that, with an appropriate choice of the hyper-parameter, the new Bayes estimator has smaller mean squared error (MSE) than the classic MLE, the Burrows estimator, and the existing Bayes estimator. Furthermore, on the basis of extensive Monte Carlo simulation, we determine the best hyper-parameters in the sense that the corresponding new Bayes estimator has the smallest MSE; a table of these best hyper-parameters is provided for proportions within the considered range.
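For reference, the classic group-testing MLE that the proposed Bayes estimator is compared against can be written down directly; the notation (n pools of common size k, T positive pools) is a standard setup assumed here for illustration rather than taken from the thesis.

```latex
% Each of n pools contains k subjects; a pool tests positive iff at least one member
% has the trait.  With individual prevalence p, a pool is negative with probability
% (1-p)^k, so if T of the n pools test positive the MLE is
\[
  \hat{p}_{\mathrm{MLE}} \;=\; 1 - \left(1 - \frac{T}{n}\right)^{1/k}.
\]
% A Bayes alternative places a prior on p concentrated on the small range where p is
% believed to lie (e.g. a Beta prior) and uses the posterior mean; estimators of that
% kind are what the study above compares with the MLE.
```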
|
7 |
Spatial Allocation, Imputation, and Sampling Methods for Timber Product Output Data. Brown, John, 10 November 2009 (has links)
Data from the 2001 and 2003 timber product output (TPO) studies for Georgia were explored to determine new methods for handling missing data and finding suitable sampling estimators.
Mean roundwood volume receipts per mill for the year 2003 were calculated using the methods developed by Rubin (1987). Mean receipts per mill ranged from 4.4 to 14.2 million ft3. The mean value of 9.3 million ft3 did not differ statistically from the NONMISS, SINGLE1, and SINGLE2 reference means (p = .68, .75, and .76, respectively).
Fourteen estimators were investigated to compare sampling approaches. The estimators covered several mean types (simple random sample, ratio, stratified sample, and combined ratio) and employed two stratification methods: the Dalenius-Hodges (DH) cumulative square root of frequency method and a cluster analysis method. Relative efficiency (RE) improved as the number of groups increased and when a ratio estimator, particularly a combined ratio, was employed. Neither the DH method nor the cluster analysis method performed better than the other.
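A small sketch of how the DH cumulative square root of frequency rule forms strata from a frequency distribution is given below; the bin counts and the two-stratum choice are hypothetical and serve only to show the mechanics.

```python
# A minimal sketch of Dalenius-Hodges (cumulative square root of frequency)
# stratification.  The frequency table and the two-stratum target are hypothetical.
import numpy as np

def dalenius_hodges_breaks(bin_edges, freqs, n_strata):
    """Return stratum boundaries that split the cumulative sqrt(f) into equal parts."""
    cum_sqrt_f = np.cumsum(np.sqrt(freqs))
    targets = cum_sqrt_f[-1] * np.arange(1, n_strata) / n_strata
    # boundary = upper edge of the bin where cumulative sqrt(f) first reaches each target
    idx = np.searchsorted(cum_sqrt_f, targets)
    return [float(bin_edges[i + 1]) for i in idx]

# hypothetical mill-receipt volume distribution (million ft^3), binned
bin_edges = np.array([0, 2, 4, 6, 8, 10, 12, 14])
freqs     = np.array([40, 25, 14, 8, 5, 3, 2])   # number of mills per bin

print(dalenius_hodges_breaks(bin_edges, freqs, n_strata=2))
# one cut point separating small-volume mills from large-volume mills
```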
Six bound sizes (1, 5, 10, 15, 20, and 25 percent) were considered for deriving sample sizes for the total volume of roundwood. The minimum achievable bound was found to be 10 percent of the total receipts volume, obtained with the DH method using a two-group stratification; this was true for both the stratified and combined ratio estimators. Moreover, among the stratified and combined ratio estimators, only the DH-method stratifications were able to reach a 10 percent bound on the total (6 of the 12 stratified estimators); the remaining six stratified estimators achieved a 20 percent bound on the total.
Finally, nonlinear repeated measures models were developed to spatially allocate mill receipts to surrounding counties in the event that only a mill's total receipt volume is available. A Gompertz model with a power spatial covariance performed best when using road distances from the mills to either type of county center (geographic or forest mass). These models used the cumulative frequency of mill receipts as the response variable, with cumulative frequencies based on the distance from the mill to the county. / Ph. D.
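For reference, a generic Gompertz curve in distance d has the form below; the parameterization is illustrative background, not the fitted spatial-covariance model reported in the study.

```latex
% Cumulative share of a mill's receipts attributed to counties within road distance d:
\[
  F(d) \;=\; a \exp\!\bigl(-b\, e^{-c\,d}\bigr),
  \qquad a, b, c > 0,
\]
% a sigmoidal curve rising toward the asymptote a, whose parameters would be
% estimated from the observed cumulative frequencies of receipts by distance.
```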
|
8 |
A Preliminary Examination of Data Envelopment Analysis for Prioritizing Improvements of a Set of Independent Four Way Signalized Intersections in a Region. Kumar, Manjunathan, 28 January 2003 (has links)
Evaluation of critical transportation infrastructure and its operation is vital for continuous improvement to meet the growing needs of society over time. The current practice for evaluating signalized intersections has two steps. The first is to determine the level of service at which the intersection is performing: Level of Service (LOS) is based on the average delay per vehicle passing through the intersection under consideration. The second step is to perform a capacity analysis, which considers the number of lanes and other infrastructure-related factors and also includes the influence of the control strategies.
The procedure described above evaluates only one intersection at a time. For planning purposes, such as choosing sites for improvement, it is necessary to compare and rank a given set of intersections.
The research work presented in this thesis demonstrates how Data Envelopment Analysis (DEA) can be used as a tool to compare and rank a given set of comparable intersections. The study elaborates on various ways of representing different characteristics of an intersection; the demonstration is restricted to four-way signalized intersections.
The intersections used for the demonstration in this research were created in a controlled random fashion by simulation. / Master of Science
|
9 |
Determining the late effect parameter in the Fleming-Harrington test using asymptotic relative efficiency in cancer immunotherapy clinical trials / がん免疫治療臨床試験における漸近相対効率を用いたFleming-Harrington検定の遅延した治療効果の検出のパラメータの設定. Kaneko, Yuichiro, 23 January 2024 (has links)
Kyoto University / New system, doctoral course / Doctor of Medical Science / 甲第24998号 / 医博第5032号 / 新制||医||1069 (University Library) / Department of Medicine, Graduate School of Medicine, Kyoto University / (Chief examiner) Professor 佐藤 俊哉, Professor 山本 洋介, Professor 永井 洋士 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
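As background to the title, the Fleming-Harrington test weights the log-rank increments to emphasize late differences between survival curves; the standard weight function is shown below (general background, not material from the thesis itself).

```latex
% Fleming-Harrington G(rho, gamma) weighted log-rank test: at event time t the
% log-rank increment is weighted by
\[
  w(t) \;=\; \hat{S}(t^{-})^{\rho}\,\bigl(1 - \hat{S}(t^{-})\bigr)^{\gamma},
\]
% where \hat{S} is the pooled Kaplan-Meier estimator.  Choosing \rho = 0 and \gamma > 0
% down-weights early events and emphasizes late (delayed) separation of the curves,
% so the "late effect parameter" \gamma can be tuned, e.g. by comparing asymptotic
% relative efficiency under an assumed delayed-effect alternative.
```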
|