21

Optimizing Performance Measures in Classification Using Ensemble Learning Methods

January 2017 (has links)
abstract: Ensemble learning methods such as bagging, boosting, adaptive boosting, and stacking have traditionally shown promising results in improving predictive accuracy in classification. These techniques have recently been widely used across domains and applications owing to advances in computational efficiency and distributed computing. However, as machine learning techniques are increasingly applied to class-imbalance problems, further focus is needed on evaluating, improving, and optimizing other performance measures, such as sensitivity (true positive rate) and specificity (true negative rate). This thesis demonstrates a novel approach to evaluating and optimizing these performance measures using ensemble learning methods for classification, which can be especially useful on class-imbalanced datasets. Ensemble learning methods (specifically bagging and boosting) are used to optimize sensitivity and specificity on the UC Irvine (UCI) 130-hospital diabetes dataset to predict whether a patient will be readmitted to the hospital based on various feature vectors. From the experiments conducted, it can be empirically concluded that, although ensemble learning improves accuracy only by some margin, it optimizes both sensitivity and specificity significantly and consistently across different cross-validation approaches. The implementation and evaluation were done on a subset of the large UCI 130-hospital diabetes dataset. The performance measures of the ensemble learners are compared against base machine learning classification algorithms such as Naive Bayes, Logistic Regression, k-Nearest Neighbors, Decision Trees, and Support Vector Machines. / Dissertation/Thesis / Masters Thesis Computer Science 2017
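The two measures at the heart of this abstract can be sketched in a few lines. The confusion-matrix definitions and the majority-vote combination below are generic illustrations of the technique, not the thesis's implementation, and the patient labels are hypothetical.

```python
# Hedged sketch: sensitivity and specificity from a confusion matrix,
# combined with a toy majority-vote (bagging-style) ensemble.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity(tp, fn):   # true positive rate
    return tp / (tp + fn)

def specificity(tn, fp):   # true negative rate
    return tn / (tn + fp)

def majority_vote(predictions):
    """Combine base-learner predictions by simple majority vote."""
    return [1 if sum(col) * 2 > len(col) else 0 for col in zip(*predictions)]

# Three hypothetical base learners voting on six patients (1 = readmitted).
base_preds = [
    [1, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1, 1],
]
y_true = [1, 0, 1, 0, 1, 0]
y_ens = majority_vote(base_preds)
tp, tn, fp, fn = confusion_counts(y_true, y_ens)
print(sensitivity(tp, fn), specificity(tn, fp))  # -> 1.0 1.0
```

On an imbalanced dataset the interesting case is when accuracy looks fine but sensitivity is poor; computing both measures separately, as above, is what exposes that gap.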
22

IFRS e a divulgação das medidas de desempenho não-GAAP "EBITDA" e "EBITDA Ajustado" no cenário corporativo brasileiro / IFRS and the disclosure of non-GAAP performance measures "EBITDA" and "Adjusted EBITDA" in the Brazilian corporate scenario

Gabriela de Souza Vasconcelos 07 December 2017 (has links)
The present study investigates the characteristics and implications of the voluntary disclosure of the non-GAAP performance measures "EBITDA" and "Adjusted EBITDA" in financial reports prepared under IFRS in the Brazilian corporate scenario. The main concern related to voluntary disclosures is whether this information actually safeguards the quality of users' decision-making. The study is empirical-theoretical in nature, with both a qualitative and a quantitative approach. Documentary data were extracted from three sources: Thomson Reuters; annual reports and press releases made available on each company's website; and reference forms available on the CVM website. The selected sample is the IBrX 100 index, and the data analyzed cover the quarterly and annual periods of 2014 and 2015. To collect perceptions about the use and disclosure of the metrics studied, a semi-structured questionnaire was applied to partners of Big Four firms. The main qualitative results suggest, in general terms: that the use and disclosure of the measures studied has occurred in a broad, consistent, and regular way; that 79% of the additional adjustments made by the companies through Adjusted EBITDA are a consequence of accounting principles and rules in force under IFRS; that the most common additional adjustments made by the companies are impairment, provisions, error corrections, and the equity method; and that the use and disclosure of the measures investigated is necessary because accounting alone does not provide users with a measure of the isolated performance of a company's operating activity.
It can be concluded from the quantitative results of this study that larger companies, which adhere to corporate governance levels and have lower net revenues, are more likely to disclose the measures "EBITDA" and "Adjusted EBITDA". The evidence from this study may contribute to the current discussion among regulators and standard setters by pointing out the informative role of alternative performance measures, while warning that these figures need to be monitored and supervised by the appropriate bodies and institutions.
23

The Effect of Circular Economy on Financial KPIs : A study on Swedish SMEs within the manufacturing industry

Schaumberger, Stefan, Degerstedt, Gabrielle January 2022 (has links)
Circular economy is a topic that has gained a lot of attention during the last decades. Even so, there is still a research gap at the micro level regarding how circular economy influences financial performance. This paper aims to investigate whether circular economy has a positive impact on financial performance indicators. Furthermore, it explores whether firm size has an impact on the level of circularity, as well as whether circularity has an impact on financial performance. Using a sample of Swedish companies, this paper applied the 9R framework to enhance knowledge of the level of circularity. A survey was sent to 239 SMEs within the manufacturing industry in Sweden to gather information about the expected relation between circular economy and financial performance. Previous research points out that companies struggle to implement circularity since the supporting systems are not yet developed. This paper cannot confirm the reasons behind the low number of companies with adopted circular processes, which could be investigated further by other researchers. However, it was found that most companies are still focusing on sustainability and only a few have implemented circularity in their business model. Furthermore, firm size does not have an impact on the level of circularity, which could be because the majority of participating companies are classified as small or because most companies are still linear. Finally, the analysis results show that circular economy has a positive influence on the financial KPIs sales, return on assets, and economic value added, and that the higher the level of circularity, the greater the impact.
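Two of the financial KPIs named in the abstract, return on assets and economic value added, have standard textbook definitions. The sketch below uses hypothetical figures, not the surveyed firms' data.

```python
# Hedged sketch of two financial KPIs from the study; all figures hypothetical.

def return_on_assets(net_income, total_assets):
    """ROA = net income / total assets."""
    return net_income / total_assets

def economic_value_added(nopat, wacc, invested_capital):
    """EVA = net operating profit after tax minus the capital charge."""
    return nopat - wacc * invested_capital

roa = return_on_assets(net_income=120_000, total_assets=1_500_000)
eva = economic_value_added(nopat=150_000, wacc=0.08, invested_capital=1_200_000)
print(f"ROA = {roa:.1%}, EVA = {eva:,.0f}")  # -> ROA = 8.0%, EVA = 54,000
```

A positive EVA means the firm earned more than its cost of capital in the period, which is why the study treats it as a performance indicator distinct from raw sales.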
24

Systemic Network-Level Approaches for Identifying Locations with High Potential for Wet and Hydroplaning Crashes

Velez Rodriguez, Kenneth Xavier 02 September 2021 (has links)
Crashes on wet pavements are responsible for 25% of all crashes and 13.5% of fatal crashes in the US (Harwood et al. 1988). This number represents a significant portion of all crashes. Current methods used by the Departments of Transportation (DOTs) are based on wet-over-dry ratios and simplified approaches to estimating hydroplaning speeds. Only a fraction of all wet crashes involve hydroplaning; although the two are related, in a typical wet crash the hydrodynamic-based severity is lower, whereas in hydroplaning the driver loses control of the vehicle. This dissertation presents a new conceptual framework designed to reduce wet- and hydroplaning-related crashes by identifying locations with a high risk of crashes using systemic, data-driven, risk-based approaches and available data. The first method is a robust systemic approach to identifying areas with a high risk of wet crashes, using a negative binomial regression to quantify the relationship between the wet-to-dry ratio (WDR), traffic, and road characteristics. Results indicate that the estimates are more reliable than the WDR methods currently used by DOTs. Two significant parameters are the grade difference and its absolute value. The second method is a simplified approach that identifies areas with a high risk of wet crashes from crash counts alone by applying a spatial multiresolution analysis (SMA). Results indicate that SMA performs better than current hazardous road segment identification (HRSI) methods based on crash counts, by consistently identifying sites across several years for selected 0.1 km sections. A third method is a novel systemic approach to identifying locations with a high risk of hydroplaning through a new risk-measuring parameter named the performance margin, which considers road geometry, environmental conditions, vehicle characteristics, and operational conditions. The performance margin can replace hydroplaning speed as the traditional parameter of interest.
The hydroplaning risk depends on more factors than those identified in previous research, which focuses solely on tire inflation pressure, tire footprint area, or wheel load. The braking and tire-tread parameters significantly affected the performance margin. Highway engineers can now incorporate an enhanced tool for hydroplaning risk estimation that allows systemic analysis. Finally, a critical review was conducted to identify existing solutions to reduce the high potential for skidding or hydroplaning on wet pavement. The recommended strategies to help mitigate skidding and hydroplaning are presented to aid the decision process and resource allocation. Geometric design optimization provides a permanent impact on pavement runoff characteristics, reducing water accumulation and water-film thickness on the lanes. Road surface modification provides a temporary impact on practical performance, complemented by non-engineering measures. / Doctor of Philosophy / Crashes on wet pavements are responsible for 25% of all crashes and 13.5% of fatal crashes in the US (Harwood et al. 1988). Current procedures used by DOTs to identify locations with a high number of wet crashes and hydroplaning are too simple and might not represent the actual risk. Only a fraction of all wet crashes involve hydroplaning; although the two are related, in a wet crash the water-vehicle interaction is less severe, whereas in hydroplaning the driver loses control. This dissertation presents a new procedure to evaluate the road network and identify locations with a high risk of wet crashes and hydroplaning. The risk estimation process uses data collected in the field to determine the risk at a particular location, and the approach to apply depends on the data available to the transportation agency. The first statistical method estimates the frequency of wet crashes at a location. This estimate is developed using a statistical model, negative binomial regression, which relates the frequency of dry crashes, wet crashes, traffic, and road characteristics to the total number of wet crashes at a location. Results indicate that this option is more reliable than the current methods used by DOTs, which simply divide the number of wet crashes by the number of dry crashes. Two elements identified as influencing the results are the difference in road grade and its absolute value. The second statistical method estimates wet crashes from crash counts by applying a statistical process, spatial multiresolution analysis (SMA). Results indicate that SMA performs better than current processes based only on crash counts: it can identify high-risk locations consistently across different years, and the more consistent the method, the more accurate the results. A third statistical method is a novel way to estimate hydroplaning risk. Hydroplaning risk is currently based on finding the maximum speed before hydroplaning occurs; an estimation method developed by Gallaway et al. (1971), relating a vehicle's performance to the water-film thickness, includes rainfall intensity, road characteristics, vehicle characteristics, and operating conditions. Hydroplaning risk depends on more aspects than tire inflation pressure, tire footprint area, or the load on the wheel; braking and tire tread also affect the performance margin. Highway engineers can use this improved hydroplaning risk-estimation tool to analyze the road network. Finally, a critical review surveyed the available solutions to reduce the probability of a wet crash or hydroplaning on wet pavement. The recommended mitigation strategies provide information for allocating resources based on proven, practical measures. Road geometry design can be optimized to remove water from the road: this is a permanent modification of pavement characteristics that reduces water accumulation and water-film thickness. Road surface treatments and non-engineering measures provide temporary means to improve vehicle performance or driver operation.
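For orientation only: the "hydroplaning speed" that the dissertation's performance margin is meant to replace is often approximated in practice by the classic simplified NASA rule of thumb attributed to Horne and Dreher, speed (mph) ≈ 10.35 √(tire pressure in psi). The sketch below illustrates that simplified approach, which the dissertation critiques, not its new performance-margin method.

```python
import math

# Classic simplified NASA estimate (attributed to Horne and Dreher):
# hydroplaning speed in mph ~= 10.35 * sqrt(tire inflation pressure in psi).
# Note it ignores water depth, tread, braking, and road geometry, which is
# exactly the limitation the dissertation's performance margin addresses.

def hydroplaning_speed_mph(tire_pressure_psi):
    return 10.35 * math.sqrt(tire_pressure_psi)

for psi in (24, 32, 36):
    print(f"{psi} psi -> {hydroplaning_speed_mph(psi):.1f} mph")
```

For a typical passenger-car inflation pressure around 32 psi this gives roughly 58–59 mph, which is why guidance on wet roads concentrates on highway speeds.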
25

Evaluating novel hedge fund performance measures under different economic conditions / Francois van Dyk

Van Dyk, Francois January 2014 (has links)
Performance measurement is an integral part of investment analysis and risk management. Investment performance comprises two primary elements, namely risk and return. The measurement of return is more straightforward than the measurement of risk: the latter is stochastic and thus requires more complex computation. Risk and return should, however, not be considered in isolation by investors, as these elements are interlinked according to modern portfolio theory (MPT). Assembling risk and return into a risk-adjusted number is an essential responsibility of performance measurement, as it is meaningless to compare funds with dissimilar expected returns and risks by focusing solely on total return values. Since the advent of MPT, performance evaluation has been conducted within the risk-return or mean-variance framework. Traditional, linear performance measures, such as the Sharpe ratio, do, however, have their drawbacks despite their widespread use and copious interpretations. The first problem concerns the characterisation of hedge fund returns, which leads to standard methods of assessing the risks and rewards of these funds being misleading and inappropriate. Volatility measures such as the Sharpe ratio, which are based on mean-variance theory, are generally unsuitable for dealing with asymmetric return distributions. The distribution of hedge fund returns deviates significantly from normality, consequently rendering volatility measures ill-suited for hedge fund returns because they do not incorporate the higher moments of the return distribution. Investors nevertheless rely on traditional performance measures to evaluate the risk-adjusted performance of these investments. Moreover, these traditional risk-adjusted performance measures were developed specifically for traditional investments (i.e. non-dynamic and/or linear investments).
Hedge funds also embrace a variety of strategies, styles and securities, all of which emphasises the necessity for risk management measures and techniques designed specifically for these dynamic funds. The second problem recognises that traditional risk-adjusted performance measures are not complete, as they do not implicitly include or measure all components of risk. These traditional performance measures can therefore be considered one-dimensional, as each measure captures only a particular component or type of risk and leaves other risk components or dimensions untouched. Dynamic, sophisticated investments, such as those pursued by hedge funds, are often characterised by multi-risk dimensionality. The different risk types to which hedge funds are exposed substantiate the fact that volatility does not capture all inherent hedge fund risk factors, and no single existing measure captures the entire spectrum of risks. Therefore, traditional risk measurement methods must be modified, or performance measures that consider the risk components (factors) left unconsidered by the traditional measures should be applied alongside traditional performance appraisal measures. Moreover, the 2007-9 global financial crisis set off an essential debate about whether risks are being measured appropriately and, in turn, prompted the re-evaluation of risk analysis methods and techniques. The need to continuously augment existing techniques, and to devise new ones, for measuring financial risk is paramount given the continuous development and ever-increasing sophistication of financial markets and the hedge fund industry. This thesis explores the named problems facing modern financial risk management in a hedge fund portfolio context through three objectives.
The aim of this thesis is to critically evaluate whether the novel performance measures included provide investors with additional information, beyond traditional performance measures, when making hedge fund investment decisions. The Sharpe ratio is taken as the primary representative of traditional performance measures given its widespread use and its status as the hedge fund industry's performance metric of choice. The objectives have been accomplished through the modification, altered use or alternative application of existing risk assessment techniques, and through the development of new techniques where traditional or older techniques proved to be inadequate. / PhD (Risk Management), North-West University, Potchefstroom Campus, 2014
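The abstract's central complaint, that the Sharpe ratio ignores higher moments, can be made concrete in a few lines. The monthly returns below are hypothetical, chosen to include one large drawdown so the series is negatively skewed while still posting a positive Sharpe ratio.

```python
import statistics

# Hedged sketch: the Sharpe ratio alongside the skewness that
# mean-variance measures ignore. The return series is hypothetical.

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return over its sample standard deviation."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

def skewness(returns):
    """Third standardised moment (population form)."""
    m = statistics.mean(returns)
    s = statistics.pstdev(returns)
    return sum((r - m) ** 3 for r in returns) / (len(returns) * s ** 3)

monthly = [0.02, 0.01, 0.03, -0.08, 0.02, 0.015]  # one large drawdown
print(round(sharpe_ratio(monthly), 3), round(skewness(monthly), 3))
```

The series has a positive Sharpe ratio yet pronounced negative skewness, which is the pattern the thesis argues mean-variance measures cannot distinguish from a symmetric return stream.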
27

Theory of Constraints for Publicly Funded Health Systems

Sadat, Somayeh 28 September 2009 (has links)
This thesis aims to fill gaps in the literature on the theory of constraints (TOC) in publicly funded health systems. While TOC seems a natural fit for this resource-constrained environment, there is still no reported application of TOC's drum-buffer-rope tool, and customizations regarding the definition of a system-wide goal and performance measures remain inadequate. The "Drum-Buffer-Rope for an Outpatient Cancer Facility" chapter is a real-world case study exploring the usefulness of TOC's drum-buffer-rope scheduling technique in a publicly funded outpatient cancer facility. Using a discrete event simulation model populated with historical data, the drum-buffer-rope scheduling policy is compared against "high constraint utilization" and "low wait time" scenarios. Drum-buffer-rope proved to be an effective mechanism for balancing the inherent tradeoff between the two performance measures of instances of delayed treatment and average patient wait time. To find the appropriate level of compromise in one performance measure in favor of the other, a linkage of these measures to system-wide performance measures is proposed. In the "Theory of Constraints' Performance Measures for Publicly Funded Health Systems" chapter, a system dynamics representation of the classical TOC system-wide goal and performance measures for publicly traded for-profit companies is developed, which forms the basis for a similar model for publicly funded health systems. The model is then expanded to include some of the factors that affect system performance, providing a framework for applying TOC's process of ongoing improvement in publicly funded health systems. The "Connecting Low-Level Performance Measures to the Goal" chapter provides a framework for linking low-level performance measures with system-wide ones. It is argued that until such a linkage is adequately established, TOC has not been fully transferred to publicly funded health systems.
28

The Money-Moving Syndrome and the Effectiveness of Foreign Aid

Monkam, Nara Françoise Kamo 13 May 2008 (has links)
This dissertation examines in depth one of the potential causes of the low performance of foreign aid: the role that incentive structures within international donor agencies could play in creating "a push" to disburse money. This pressure to disburse money is termed the "Money-Moving Syndrome". In this dissertation, the "Money-Moving Syndrome" exists when the quantity of foreign aid committed or disbursed becomes, in itself, an important objective alongside or even above the effectiveness of aid. The theoretical analysis relies on principal-agent theory to explore how donor agencies' institutional incentive systems may affect the characteristics of an optimal and efficient incentive contract and thus give rise to the "Money-Moving Syndrome". We adapted the basic framework developed in Baker (1992) to fit the organizational settings of international development agencies. The model concludes that the extent to which a performance measure based on the amount of aid allocated within a specific period would lead to the "Money-Moving Syndrome" and affect aid effectiveness depends on the level of institutional imperatives for survival and growth, the degree of an aid agency's accountability for effectiveness, the level of corruption in recipient countries, and the degree of difficulty in evaluating development activities. Owing to data unavailability for other bilateral and multilateral aid agencies, the empirical framework tests several predictions of the theoretical model by examining whether money-moving incentives affect the World Bank's decisions regarding project loan size in developing countries. Overall, the empirical results suggest that some degree of "Money-Moving Syndrome" appears to be in effect within the World Bank.
29

A Business Process Performance Measure Definition System Supported By Information Technologies

Alpay Koc, Nurcan 01 January 2013 (has links)
There is growing interest in, and research on, the improvement of business processes as an essential part of effective quality management. Process improvement is possible through measurement and analysis of process performance. Process performance measurement has been studied to a certain extent in the literature, and many different approaches have been developed, such as the Sink-Tuttle Model, the Performance Measurement Matrix, the SMART Pyramid, the Balanced Scorecard Approach, the Critical Few Method, and the Performance Prism Framework. These approaches require that process owners and analysts define appropriate measures for each process separately, based on general guidelines. Recently, with the advancement of information technologies, modeling and simulation of processes on a computer-aided platform has become possible; standards and software supporting such applications have been developed. Even though increasingly many organizations have been building their process models on computers, only a few manage to use such models effectively for process improvement. This is partly due to difficulties in defining appropriate performance measures for the processes. The purpose of this study is to propose a method for defining performance measures of business processes easily and effectively, according to the specific nature of those processes. The proposed performance measure definition system is based on the idea of using generic process performance measures published by trusted business process frameworks for high-level processes and adapting them for lower-level ones. The system, using a search mechanism available on a computer, allows users to easily find and define appropriate performance measures for their processes. The proposed system is applied to a research project management process and a creating-research-opportunities process of a public university, and the results are discussed.
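The search mechanism described above can be sketched as matching a process description against a catalogue of generic measures. The toy catalogue and keyword matcher below are hypothetical stand-ins illustrating the idea, not the system or any published framework's actual measures.

```python
# Hedged illustration: suggest generic performance measures for a process
# by keyword overlap. Catalogue entries are hypothetical examples.

GENERIC_MEASURES = {
    "cycle time": ["process", "time", "duration", "throughput"],
    "first-pass yield": ["quality", "defect", "rework"],
    "cost per transaction": ["cost", "budget", "transaction"],
    "on-time completion rate": ["schedule", "deadline", "project", "time"],
}

def suggest_measures(process_description):
    """Return catalogue measures whose keywords appear in the description,
    best matches first."""
    words = set(process_description.lower().split())
    scored = []
    for measure, keywords in GENERIC_MEASURES.items():
        score = sum(1 for kw in keywords if kw in words)
        if score:
            scored.append((score, measure))
    return [m for s, m in sorted(scored, reverse=True)]

print(suggest_measures("research project management with schedule and budget tracking"))
# -> ['on-time completion rate', 'cost per transaction']
```

A production system would search a real framework's measure hierarchy and let the analyst adapt the matched high-level measures to the lower-level process, but the retrieval step reduces to this kind of scoring.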
30

Incorporating Sustainability into Transportation Planning and Decision Making: Definitions, Performance Measures, and Evaluation

Jeon, Mihyeon Christy 14 November 2007 (has links)
An increasing number of agencies have begun to define sustainability for transportation systems and are taking steps to incorporate the concept into the regional transportation planning process. Planning for sustainable transportation systems should, at the very least, incorporate their broader impacts on system effectiveness, environmental integrity, economic development, and the social quality of life. This study reviews definitions, performance measures, and evaluation methodologies for transportation system sustainability and demonstrates a framework for incorporating sustainability considerations into transportation planning and decision making. Through a case study using data from the Atlanta Metropolitan Region, the study evaluates competing transportation and land use plans on a broad range of sustainability parameters using relevant spatial and environmental analyses. A multiple criteria decision making (MCDM) method enables the aggregation of individual performance measures into four basic indexes, and further into a composite sustainability index, based on regional goals and priorities. The value of the indexes lies in their ability to capture the multidimensional nature of sustainability as well as important tradeoffs among potentially conflicting decision criteria. A decision support tool is proposed to visualize dominance and tradeoffs when evaluating alternatives and to reflect changing regional priorities over time. The proposed framework should help decision makers incorporate sustainability considerations into transportation planning and identify superior plans for predetermined objectives.
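The MCDM aggregation step described above can be sketched as min-max normalisation of each measure followed by a weighted sum. The plans, scores, and weights below are hypothetical, not the Atlanta case-study values.

```python
# Hedged sketch of MCDM-style aggregation into a composite index.
# All scores and weights are hypothetical.

def minmax(values, invert=False):
    """Scale values to [0, 1]; invert for measures where lower is better."""
    lo, hi = min(values), max(values)
    norm = [(v - lo) / (hi - lo) for v in values]
    return [1 - n for n in norm] if invert else norm

# Three competing plans scored on two measures (emissions: lower is better).
access = minmax([0.62, 0.81, 0.70])              # accessibility score
emission = minmax([5.1, 6.4, 4.2], invert=True)  # tonnes CO2 per capita
weights = {"access": 0.6, "emission": 0.4}       # regional priorities

composite = [weights["access"] * a + weights["emission"] * e
             for a, e in zip(access, emission)]
best = max(range(len(composite)), key=composite.__getitem__)
print([round(c, 3) for c in composite], "best plan:", best)
# -> [0.236, 0.6, 0.653] best plan: 2
```

Changing the weights reruns the ranking under different regional priorities, which is the tradeoff visualization role the proposed decision support tool plays.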
