81 |
Methods in productivity and efficiency analysis with applications to warehousing. Johnson, Andrew, 31 March 2006 (has links)
This thesis addresses a set of technical issues related to benchmarking best-practice behavior in warehouses. To identify best practice, performance must first be measured, and a variety of tools are available for measuring productivity and efficiency. One of the most common is data envelopment analysis (DEA). Given a system that consumes inputs to generate outputs, previous work has shown that production theory can be used to develop basic postulates about the production possibility space and to construct an efficient frontier, which is then used to quantify efficiency. Beyond inputs and outputs, warehouses typically have practices (techniques used in the warehouse) or attributes (characteristics of the warehouse's environment, including demand characteristics) that also influence efficiency. A two-stage method has previously been developed in the literature to investigate the impact of practices and attributes on efficiency. When applying this method, two issues arose: how to measure efficiency in small samples and how to identify outliers. The small-sample efficiency measurement method developed in this thesis, the multi-input/multi-output quantile-based approach (MQBA), uses deleted residuals to estimate efficiency. The outlier detection method introduces the inefficient frontier: by constructing both an efficient and an inefficient frontier, both overly efficient and overly inefficient outliers can be identified. The outlier detection method incorporates an iterative procedure that had been described, but not implemented, in the earlier literature. Further, this thesis discusses issues related to selecting an orientation in super-efficiency models, which are used in outlier detection but are also commonly used to measure technical progress via the Malmquist index. These issues are addressed using two data sets recently collected in the warehousing industry. The first data set consists of 390 observations of various types of warehouses; the other has 25 observations from a specific industry. For both data sets, it is shown that significantly different results are obtained when the methods proposed in this document are adopted.
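To make the benchmarking machinery concrete, the sketch below (not taken from the thesis) shows the standard input-oriented CCR envelopment model solved as a linear program, with an optional super-efficiency variant that excludes the evaluated unit from its own reference set, the device typically used to flag overly efficient outliers. The warehouse data, the function name, and the use of scipy are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o, exclude_self=False):
    """Input-oriented CCR efficiency of DMU `o` (illustrative sketch).
    X: inputs (m x n), Y: outputs (s x n). With exclude_self=True the
    evaluated DMU is removed from the reference set (super-efficiency),
    so scores above 1 flag potentially outlying over-performers."""
    m, n = X.shape
    s = Y.shape[0]
    ref = [j for j in range(n) if not (exclude_self and j == o)]
    Xr, Yr = X[:, ref], Y[:, ref]
    k = len(ref)
    # Decision vector z = [theta, lambda_1..lambda_k]; minimize theta
    c = np.concatenate([[1.0], np.zeros(k)])
    A_in = np.hstack([-X[:, [o]], Xr])            # sum_j lam_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Yr])    # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * k, method="highs")
    return res.x[0]

# Illustrative data: 2 inputs (labour hours, floor space) and 1 output (lines shipped)
X = np.array([[100.0, 80.0, 120.0, 90.0],
              [50.0,  60.0,  70.0, 40.0]])
Y = np.array([[1000.0, 900.0, 1100.0, 950.0]])
print([round(ccr_input_efficiency(X, Y, o), 3) for o in range(4)])                     # standard scores
print([round(ccr_input_efficiency(X, Y, o, exclude_self=True), 3) for o in range(4)])  # super-efficiency
```

Units scoring 1.0 lie on the efficient frontier; under the super-efficiency variant, unusually large scores are the kind of observations the thesis's outlier-detection procedure would examine further.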
|
82 |
Design and performance evaluation of failure prediction models. Mousavi Biouki, Seyed Mohammad Mahdi, January 2017 (has links)
Prediction of corporate bankruptcy (or distress) is one of the major activities in auditing firms' risks and uncertainties, and the design of reliable models to predict distress is crucial for many decision-making processes. Although a variety of models have been designed to predict distress, the relative performance evaluation of competing prediction models remains unidimensional in nature. More specifically, although some studies use several performance criteria and their measures to assess the relative performance of distress prediction models, the assessment of competing prediction models is restricted to ranking them by a single measure of a single criterion at a time, which leads to conflicting results being reported. The first essay of this research overcomes this methodological issue by proposing an orientation-free super-efficiency Data Envelopment Analysis (DEA) model as a multi-criteria assessment framework. Furthermore, the study performs an exhaustive comparative analysis of the most popular bankruptcy modelling frameworks for UK data. It also addresses two important research questions: do some modelling frameworks perform better than others by design, and to what extent do the choice and/or design of explanatory variables and their nature affect the performance of modelling frameworks? Further, using different static and dynamic statistical frameworks, this chapter proposes new Failure Prediction Models (FPMs). However, within a super-efficiency DEA framework, the reference benchmark changes from the evaluation of one prediction model to another, which in some contexts might be viewed as "unfair" benchmarking. The second essay overcomes this issue by proposing a Slacks-Based Measure Context-Dependent DEA (SBM-CDEA) framework to evaluate the competing Distress Prediction Models (DPMs). Moreover, it performs an exhaustive comparative analysis of the most popular corporate distress prediction frameworks under both a single criterion and multiple criteria, using data on UK firms listed on the London Stock Exchange (LSE). Further, this chapter proposes new DPMs using different static and dynamic statistical frameworks. Another shortcoming of the existing studies on performance evaluation lies in the use of static frameworks to compare the performance of DPMs. The third essay overcomes this methodological issue by suggesting a dynamic multi-criteria performance assessment framework, namely Malmquist SBM-DEA, which by design can monitor the performance of competing prediction models over time. Further, this study proposes new static and dynamic distress prediction models. The study also addresses several research questions: what is the effect of information on the performance of DPMs? How does the out-of-sample performance of dynamic DPMs compare to that of static ones? What is the effect of the length of the training sample on the performance of static and dynamic models? Which models perform better in forecasting distress during years with a Higher Distress Rate (HDR)? On feature selection, studies have used different types of information, including accounting, market and macroeconomic variables as well as management efficiency scores, as predictors. The techniques recently applied to take the management efficiency of firms into account are two-stage models.
Two-stage DPMs incorporate multiple inputs and outputs to estimate, in the first stage, the efficiency of a corporation relative to the most efficient ones, and then use the efficiency score as a predictor in the second stage. A survey of the literature reveals that most existing studies lack a comprehensive comparison of two-stage DPMs. Moreover, the choice of inputs and outputs for the DEA models that estimate a company's efficiency has been restricted to accounting variables and features of the company. The fourth essay adds to the current literature on two-stage DPMs in several respects. First, the study proposes to consider the decomposition of the Slacks-Based Measure (SBM) of efficiency into Pure Technical Efficiency (PTE), Scale Efficiency (SE), and Mix Efficiency (ME), to analyse how each of these measures individually contributes to developing distress prediction models. Second, in addition to the conventional approach of using accounting variables as inputs and outputs of DEA models to estimate the measure of management efficiency, this study uses market information variables to calculate the measure of the market efficiency of companies. Third, this research provides a comprehensive analysis of two-stage DPMs by applying different DEA models at the first stage (e.g., input-oriented vs. output-oriented, radial vs. non-radial, static vs. dynamic) to compute the measures of management efficiency and market efficiency of companies, and by using dynamic and static classifier frameworks at the second stage to design new distress prediction models.
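As a rough illustration of the two-stage idea (not the thesis's exact pipeline), the sketch below feeds a first-stage efficiency score, assumed to be precomputed by a DEA/SBM model, into a second-stage classifier alongside conventional accounting predictors. The synthetic data, the logistic-regression classifier, and the scikit-learn dependency are all assumptions standing in for the static and dynamic frameworks compared in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
accounting = rng.normal(size=(n, 3))                  # e.g., leverage, profitability, liquidity ratios
efficiency = rng.uniform(0.3, 1.0, size=(n, 1))       # stage-1 DEA/SBM efficiency scores (assumed precomputed)
X = np.hstack([accounting, efficiency])

# Synthetic distress labels: higher leverage and lower efficiency raise the odds of distress
logit = -0.5 + 1.2 * accounting[:, 0] - 2.0 * efficiency[:, 0]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Stage 2: fit the classifier and check out-of-sample discrimination
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("out-of-sample AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```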
|
83 |
Ökad produktivitet vid nyproduktion av flerbostadshus : En implementering av Data Envelopment Analysis i byggprojekt, modellering av ett standardiserat arbetssätt vid uppbyggnad och beklädnad av innerväggar samt konkreta förbättringsförslag utifrån Lean Construction. Lindgren, Cecilia; Widgren, Hanna, January 2020 (has links)
Productivity growth in the construction industry is low compared to other industries, and a large, unnecessary part of a construction worker's workday is taken up by non-value-creating work, such as waiting or the shifting and handling of building materials. Low productivity in construction projects makes it difficult to keep to schedules and can entail costs for construction companies when delays occur. This project aims to evaluate and investigate, on behalf of Skanska Hus Umeå, how productivity in the construction of new apartment buildings can be improved. To carry out this assignment, a large number of qualitative and quantitative interviews have been conducted with employees at Skanska. A survey study has also been sent out to investigate how productivity in selected parts of the construction process is experienced by employees at Skanska. The interviews and the survey study have been used as a starting point for the development of mathematical models with the aim of improving productivity at Skanska Hus in Umeå's construction sites. The project has resulted in a number of proposals for improvements that can be implemented in Skanska's current working methods to increase productivity at their construction sites. An implementation of the Data Envelopment Analysis optimization model has identified technically effective and ineffective elements in the construction process, indicating which areas Skanska Hus in Umeå should focus on improving. Based on these areas, concrete proposals for new working methods have been prepared and presented, drawing on principles and tools within Lean Construction. The degree project has also resulted in a calculation program that standardizes the working method for the construction of interior walls, resulting in increased productivity as well as time and cost savings. Finally, a model for weekly production follow-up has been created to increase employee engagement, ensure high productivity throughout the construction process and create conditions for future productivity development through data collection. / Produktivitetsutvecklingen i byggbranschen är låg jämfört med andra branscher och en stor del av en yrkesarbetares arbetsdag utgörs av icke värdeadderande arbete, som till exempel väntan, förflyttning och onödig materialhantering. Låg produktivitet inom byggprojekt gör det svårt att hålla uppsatta tidplaner och innebär därmed en kostnad för både byggföretag och kunder när förseningar uppstår. På uppdrag av Skanska Hus i Umeå har detta examensarbete syftat till att utvärdera och undersöka hur produktiviteten vid nyproduktion av flerbostadshus kan förbättras. För att besvara problemställningarna har ett stort antal kvalitativa och kvantitativa intervjuer genomförts med anställda på Skanska. En enkätstudie har även skickats ut för att undersöka hur utvalda moment i byggprocessen upplevs av medarbetare på Skanska. Intervjuerna och enkätstudien har använts som underlag vid framtagandet av matematiska modeller med syftet att öka produktiviteten på Skanska Hus i Umeås byggarbetsplatser. Examensarbetet har resulterat i en implementering av optimeringsmodellen Data Envelopment Analysis vilken har identifierat tekniskt effektiva och ineffektiva moment i byggprocessen. Resultatet från Data Envelopment Analysis indikerar vilka områden Skanska Hus i Umeå bör fokusera på att förbättra. Utifrån dessa områden har konkreta förslag på nya arbetssätt tagits fram och presenteras utifrån principer och verktyg inom Lean Construction.
Examensarbetet har även resulterat i ett beräkningsprogram som standardiserar arbetssättet vid uppbyggnad och beklädnad av innerväggar vilket ger en ökad produktivitet samt tid- och kostnadsbesparingar. Slutligen har en modell för veckovis produktionsuppföljning skapats för att öka engagemanget hos medarbetarna, säkerställa en hög produktivitet genom hela byggprocessen samt skapa förutsättningar för framtida produktivitetsutveckling genom insamling av data.
|
84 |
Efficiency measurement : a methodological comparison of parametric and non-parametric approaches. Zheng, Wanyu, January 2013 (has links)
The thesis examines technical efficiency using frontier efficiency estimation techniques from both parametric and non-parametric approaches. Five frontier efficiency estimation techniques are considered: SFA, DFA, DEA-CCR, DEA-BCC and DEA-RAM. These techniques are applied to an artificially generated panel dataset built on a two-input, two-output production function framework based on the characteristics of German life insurers. The key contributions of the thesis are, firstly, a study that uses a simulated panel dataset to estimate frontier efficiency techniques and, secondly, a research framework that compares multiple frontier efficiency techniques across parametric and non-parametric approaches in the context of simulated panel data. The findings suggest that, in contrast to previous studies, parametric and non-parametric approaches can both generate comparable technical efficiency scores with simulated data. Moreover, the parametric techniques, i.e. SFA and DFA, are consistent with each other, and the same applies to the non-parametric DEA models. The study also discusses some important theoretical and methodological implications of the findings and suggests ways in which future research could overcome some of the restrictions associated with current approaches.
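A brief sketch of how such a simulation can make the "true" efficiency known in advance, so that SFA, DFA and DEA scores have a common benchmark. The Cobb-Douglas frontier, the single aggregate output (the thesis uses two outputs), the parameter values, and the half-normal inefficiency term are all illustrative assumptions, not the thesis's data-generating process.

```python
import numpy as np

rng = np.random.default_rng(42)
n_firms, n_years = 50, 5

# Two inputs (e.g., capital and labour), log-normally distributed across a panel
x1 = rng.lognormal(mean=3.0, sigma=0.4, size=(n_firms, n_years))
x2 = rng.lognormal(mean=2.0, sigma=0.4, size=(n_firms, n_years))

# Cobb-Douglas frontier output, firm-specific half-normal inefficiency u and idiosyncratic noise v
u = np.abs(rng.normal(0.0, 0.3, size=(n_firms, 1)))   # persistent inefficiency per firm
v = rng.normal(0.0, 0.1, size=(n_firms, n_years))      # statistical noise
frontier = np.exp(1.0) * x1 ** 0.4 * x2 ** 0.5
y = frontier * np.exp(v - u)

# Known benchmark against which estimated SFA/DFA/DEA scores can be compared
true_te = np.exp(-u).ravel()
print("mean true technical efficiency:", round(true_te.mean(), 3))
```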
|
85 |
Cost efficiency in the Chinese banking sector : a comparison of parametric and non-parametric methodologies. Dong, Yizhe, January 2010 (has links)
Since the open door policy was embarked upon in 1979, China's banking sector has undergone gradual but notable reforms. A key objective of the reforms implemented by the Chinese government is to build an effective, competitive and stable banking system in order to improve its efficiency and reliability. This study employs both parametric stochastic frontier analysis (SFA) and non-parametric data envelopment analysis (DEA) methods to assess and evaluate the cost efficiency of Chinese banks over the period from 1994 until 2007, a period characterised by far-reaching changes brought about by the banking reforms. To this end, we first compare a number of specifications of stochastic cost frontier models to determine the preferred frontier model, which is then adopted in our efficiency analysis. The preferred specification for our sample is a one-stage SFA model that includes the traditional input prices, the outputs and the control variables (that is, equity, non-performing loans and the time trend) in the cost frontier, and the environmental variables (that is, ownership structure, size, deregulation, market structure and market discipline) in the inefficiency term. We also employ two cost DEA models (traditional DEA and New DEA) as a complement to the preferred SFA model for methodological cross-checking purposes. Similar to the previous empirical literature, we find only moderate consistency across the different techniques in most cases. Based on our SFA model, the cost efficiency of Chinese banks is found to be 91% on average over the period from 1994 until 2007; based on the DEA and New DEA models, the average cost efficiency over the sample period is about 89% and 87%, respectively. We find that Chinese banking efficiency deteriorated after China's admission to the WTO, suggesting that the significant external environmental changes arising from China's WTO entry may have had a negative impact on banking efficiency. In addition, we find that the majority of Chinese banks exhibit scale inefficiencies and that, as asset size increases, banks tend to pass from increasing, to constant, and then to decreasing returns to scale. Our findings also show that both state-owned banks and foreign banks are more efficient than domestic private banks, and that larger banks tend to be relatively more efficient than smaller banks. These and other results suggest that, in order to enhance Chinese banking efficiency, the government needs to continue with the banking reform process and, in particular, to open up banking markets, to improve risk management and corporate governance in Chinese banks, and to encourage the expansion of banks.
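For readers unfamiliar with cost DEA, the sketch below shows the textbook cost-minimisation model under constant returns to scale: for each bank, a linear program finds the cheapest input bundle that could still produce its observed outputs, and cost efficiency is the ratio of that minimum cost to actual cost. The bank data are invented and the specification is deliberately simpler than the traditional and New DEA cost models estimated in the study.

```python
import numpy as np
from scipy.optimize import linprog

def cost_efficiency(X, Y, W, o):
    """Cost efficiency of DMU `o` under constant returns to scale (illustrative).
    X: inputs (m x n), Y: outputs (s x n), W: input prices (m x n)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [x_1..x_m, lambda_1..lambda_n]: choose the cheapest feasible input mix
    c = np.concatenate([W[:, o], np.zeros(n)])
    A_in = np.hstack([-np.eye(m), X])           # X @ lam <= x   (chosen inputs must cover the reference mix)
    A_out = np.hstack([np.zeros((s, m)), -Y])   # Y @ lam >= y_o (must still produce DMU o's outputs)
    b = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (m + n), method="highs")
    min_cost = res.fun
    actual_cost = float(W[:, o] @ X[:, o])
    return min_cost / actual_cost               # 1.0 means fully cost efficient

# Illustrative bank data: 2 inputs (funds, labour), their prices, and 1 output (loans)
X = np.array([[5.0, 7.0, 6.0], [3.0, 2.0, 4.0]])
W = np.array([[0.04, 0.05, 0.045], [1.0, 1.1, 0.9]])
Y = np.array([[10.0, 11.0, 9.0]])
print([round(cost_efficiency(X, Y, W, o), 3) for o in range(3)])
```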
|
86 |
Improving clinical efficiency of military treatment facilities. Piner, Thomas J., 09 1900 (has links)
The Department of Defense is facing medical expenses that are growing at an unprecedented rate. Its top leadership is looking for ways to reduce costs and improve efficiency while still providing world-class medical care for its beneficiaries. One option is to implement a relatively new tool called Data Envelopment Analysis (DEA), which uses linear programming to identify efficient entities, called decision making units (DMUs), relative to the other entities in the set. Past DEA studies have used military hospitals as DMUs; this study is different in that it uses clinics within hospitals as DMUs. The rationale is that administrators have difficulty using data that tells them, in general terms, that they have too many people or are spending too much money. What they need is a tool that tells them where there are too many people or where too much money is being spent. A hospital is made up of clinics, so it is intuitive to begin by improving the efficiency of the clinics, which in turn will improve the efficiency of the whole hospital.
|
87 |
Data envelopment analysis with sparse data. Gullipalli, Deep Kumar, January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / David H. Ben-Arieh / The quest for continuous improvement among organizations and the issue of missing data in data analysis are both never-ending. This thesis brings the two topics under one roof: evaluating the productivity of organizations with sparse data. The study uses Data Envelopment Analysis (DEA) to determine the efficiency of 41 member clinics of the Kansas Association of Medically Underserved (KAMU) in the presence of missing data. The primary focus of this thesis is to develop new, reliable methods for determining the missing values and executing DEA.
DEA is a linear programming methodology for evaluating the relative technical efficiency of homogeneous decision making units using multiple inputs and outputs. The effectiveness of DEA depends on the quality and quantity of the data used: DEA outcomes are susceptible to missing data, which creates a need to supplement sparse data in a reliable manner. Determining missing values more precisely improves the robustness of the DEA methodology.
Three methods for determining the missing values are proposed in this thesis, each based on a different platform. The first, named the Average Ratio Method (ARM), uses the average of all the ratios between two variables. The second is based on a modified Fuzzy C-Means clustering algorithm that can handle missing data; the issues associated with this clustering algorithm are resolved to improve its effectiveness. The third is based on an interval approach: missing values are replaced by interval ranges estimated by experts, and crisp efficiency scores are then identified along the same lines by which DEA determines efficiency scores using the best set of weights.
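A minimal sketch of how the Average Ratio Method could be read from this description: a missing entry is filled using the mean ratio between the missing variable and a fully observed reference variable. The function name, the choice of reference variable, and the toy clinic data are assumptions; the thesis may define the ratios differently.

```python
import numpy as np

def arm_impute(data, miss_col, ref_col):
    """Average Ratio Method (as read from the abstract): fill missing values of
    `miss_col` using the mean ratio miss_col/ref_col over fully observed rows."""
    complete = ~np.isnan(data[:, miss_col]) & ~np.isnan(data[:, ref_col])
    avg_ratio = np.mean(data[complete, miss_col] / data[complete, ref_col])
    filled = data.copy()
    missing = np.isnan(filled[:, miss_col]) & ~np.isnan(filled[:, ref_col])
    filled[missing, miss_col] = avg_ratio * filled[missing, ref_col]
    return filled

# Illustrative clinic data: column 0 = staff FTEs, column 1 = patient visits (one value missing)
clinics = np.array([[10.0, 1200.0],
                    [8.0,   950.0],
                    [12.0, np.nan],
                    [6.0,   700.0]])
print(arm_impute(clinics, miss_col=1, ref_col=0))
```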
There is no unique way to evaluate the effectiveness of these methods. Their effectiveness is therefore tested by choosing a complete dataset and treating varying proportions of the data as missing. The best set of recovered missing values, based on the above methods, then serves as the input for executing DEA. The results show that the DEA efficiency scores generated with the recovered values are in close proximity to the efficiency scores that would be generated with the complete data.
In summary, this thesis provides an effective and practical approach for replacing the missing values needed for DEA.
|
88 |
Modely vícekriteriálního rozhodování v analýze obalu dat / Multi-Criteria Decision Making in Data Envelopment Analysis. Mec, Martin, January 2009 (has links)
Data Envelopment Analysis is a multi-criteria decision making tool that employs a set of minimizing criteria (inputs) and a set of maximizing criteria (outputs) to evaluate the efficiency of decision making units. The method is accompanied by problems in the assignment of input and output weights, since the benevolent formulation of the basic model allows a decision making unit's evaluation to rest on a very unevenly distributed weight vector. Furthermore, the basic data envelopment analysis model produces a dichotomous division into sets of efficient and inefficient decision making units. An extensive set of efficient units occurs frequently, which makes it difficult to single out one, or a small number, of the efficient units. These phenomena often appear simultaneously. Multi-criteria decision making models are therefore incorporated into data envelopment analysis in order to reduce these undesired effects in applications.
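To see where the weight problem comes from, the sketch below solves the multiplier (weights) form of the input-oriented CCR model and prints the weight vector each unit chooses for itself; a crude lower bound on every weight, passed as min_weight, illustrates one simple restriction of the kind that multi-criteria extensions formalise more carefully. The data, the uniform bound, and the scipy-based implementation are assumptions, not the thesis's models.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_multiplier(X, Y, o, min_weight=0.0):
    """Multiplier form of input-oriented CCR for DMU `o` (illustrative).
    min_weight > 0 imposes a uniform lower bound on every input/output weight."""
    m, n = X.shape
    s = Y.shape[0]
    # Variables z = [v_1..v_m, u_1..u_s]; maximize u'y_o  <=>  minimize -u'y_o
    c = np.concatenate([np.zeros(m), -Y[:, o]])
    A_eq = np.concatenate([X[:, o], np.zeros(s)]).reshape(1, -1)    # normalisation v'x_o = 1
    b_eq = np.array([1.0])
    A_ub = np.hstack([-X.T, Y.T])                                    # u'y_j - v'x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(min_weight, None)] * (m + s), method="highs")
    return -res.fun, res.x   # efficiency score and the self-chosen weight vector

X = np.array([[4.0, 7.0, 8.0, 4.0], [3.0, 3.0, 1.0, 2.0]])   # 2 inputs, 4 DMUs
Y = np.array([[1.0, 1.0, 1.0, 1.0]])                          # 1 output
for o in range(4):
    score, weights = ccr_multiplier(X, Y, o, min_weight=0.02)
    print(o, round(score, 3), np.round(weights, 3))
```

Running it with min_weight=0.0 shows how some units reach efficiency 1 only by putting almost all weight on a single criterion, which is exactly the behaviour the multi-criteria models in this thesis aim to suppress.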
|
89 |
[en] EFFICIENCY EVALUATION OF LODGING ESTABLISHMENTS USING DEA: A CASE STUDY IN CAMPOS, RJ / [pt] AVALIAÇÃO DA EFICIÊNCIA DE ESTABELECIMENTOS DE HOSPEDAGEM USANDO DEA: UM ESTUDO DE CASO EM CAMPOS, RJ. NINA AMELIA CHARTUNI CABRAL DA CRUZ, 07 January 2014 (has links)
[pt] O setor turístico tem apresentado grande importância econômica e impulsionado o crescimento dos estabelecimentos de hospedagem no Brasil. Esses estabelecimentos buscam se instalar em regiões reconhecidamente turísticas e com potencial para o turismo de negócios. Neste trabalho, os estabelecimentos de hospedagem do município Campos dos Goytacazes, RJ, foram avaliados quanto a sua eficiência com uso da técnica de Análise por Envoltória de Dados (DEA) segundo a perspectiva dos clientes. Os valores dos outputs e inputs foram calculados a partir de observações contidas nas homepages dos estabelecimentos. Os modelos matemáticos CCR e BCC orientados a outputs foram resolvidos usando o pacote AIMMS para obter a eficiência de cada estabelecimento de hospedagem. Ao ordenar as unidades produtivas de acordo com o índice de eficiência, os estabelecimentos de pequeno porte e particulares têm a oportunidade de tomar como referência aqueles estabelecimentos que apresentaram eficiência máxima, e assim identificar e introduzir as melhorias necessárias de modo a não perderem mercado para os estabelecimentos de redes hoteleiras. Os resultados obtidos com DEA respondem ao grande questionamento desta pesquisa: os estabelecimentos de hospedagem de gestão familiar de Campos, RJ, precisam aprimorar seu desempenho, em relação aos serviços oferecidos a seus hóspedes, para se manterem competitivos no mercado diante do atual cenário em que se encontra o país? / [en] The tourism sector has shown great economic importance and has stimulated the growth of lodging establishments in Brazil. These establishments seek to locate in regions with recognised tourist appeal and potential for business tourism. In this work, the lodging establishments of the municipality of Campos dos Goytacazes, RJ, are evaluated with respect to their efficiency from the perspective of customers, using the technique of Data Envelopment Analysis (DEA). The input and output values were computed from observations collected on the establishments' websites. The output-oriented CCR and BCC mathematical models were solved using the AIMMS package to obtain the efficiency of each lodging establishment. By ordering the lodging establishments according to their efficiency scores, the small, privately run establishments have the opportunity to take as reference those establishments with maximum efficiency, and thus identify and introduce the necessary improvements so as not to lose market share to hotel chain establishments. The results obtained with DEA answer the central question of this research: do the family-run lodging establishments of Campos, RJ, need to improve their performance, with respect to the services offered to their guests, in order to remain competitive in the market given the current situation of the country?
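For reference, the output-oriented BCC (variable returns to scale) envelopment model mentioned above can be written in its standard textbook form as the linear program below, with the efficiency of each establishment reported as 1/φ*. This is the generic formulation, not necessarily the exact AIMMS implementation used in the study.

```latex
\begin{aligned}
\max_{\varphi,\;\lambda}\quad & \varphi \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le x_{io}, && i = 1,\dots,m, \\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge \varphi\, y_{ro}, && r = 1,\dots,s, \\
& \sum_{j=1}^{n} \lambda_j = 1, \qquad \lambda_j \ge 0.
\end{aligned}
```

Dropping the convexity constraint (the sum of the λ equal to 1) gives the output-oriented CCR model also used in the study.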
|
90 |
A Cost Efficiency Comparison of International Corn, Soybean, and Wheat Production. Rachel Purdy (6639149), 14 May 2019 (has links)
This paper seeks to compare production costs of similar farms to determine competitiveness across countries. A data envelopment analysis (DEA) approach was used to calculate efficiency indices for farms producing corn, soybeans, wheat, both corn and soybeans, and both corn and wheat. Technical efficiency, allocative efficiency, and cost efficiency were compared for all farms. The data consisted of a five-year (2013-2017) panel of 24 corn-producing farms, 15 soybean-producing farms, 38 wheat-producing farms, 13 farms producing both corn and soybeans, and 17 farms producing both corn and wheat. The agri benchmark network at the Thünen Institute (TI) of Farm Economics manages the dataset that was used in this analysis. Outputs were measured using revenue. Input costs included direct costs, operating costs, and overhead costs.
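The comparison of the three efficiency notions rests on the standard Farrell decomposition, recalled below as a reference (with w the input price vector, x the observed input bundle, and x* the cost-minimising bundle on the frontier); this is the textbook relation, not a result specific to the paper.

```latex
CE \;=\; \frac{w^{\top} x^{*}}{w^{\top} x} \;=\; TE \times AE,
\qquad\text{so that}\qquad
AE \;=\; \frac{CE}{TE}.
```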
|