  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Prognóstico das variáveis meteorológicas e da evapotranspiração de referência com o modelo de previsão do tempo GFS/NCEP / Prediction of meteorological variables and reference evapotranspiration with GFS/NCEP weather forecast model

Oliveira Filho, Celso Luís de 31 July 2007 (has links)
The performance of a numerical weather forecast model (GFS - Global Forecast System, formerly the AVN - AViatioN model - of the National Center for Environmental Prediction, NCEP) was evaluated for predicting the meteorological variables air temperature, vapor pressure deficit, net radiation and wind speed, as well as reference evapotranspiration (ETo) calculated by the Thornthwaite (1948) and Penman-Monteith (Allen et al., 1998) methods, by comparison with data from an automatic weather station in Piracicaba, State of São Paulo, Brazil. Temperature and vapor pressure deficit were the variables predicted most accurately, with "very good" and "good" performance according to the confidence index proposed by Camargo and Sentelhas (1997), for up to four and three days in advance, respectively, during the dry season. For the wet season, only the vapor pressure deficit forecast for the first day ahead was rated "good". The predictions of net radiation and wind speed were poor in both seasons. Because the model predicted temperature well, ETo estimated by the Thornthwaite method agreed well with ETo calculated from the weather station data for up to three days in advance during the dry season; in the wet season, this held only for the first day ahead. Agreement between model-based and station-based ETo estimates for the Penman-Monteith method was very low, as a consequence of the weather forecast model's poor performance in predicting net radiation and wind speed.
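For readers unfamiliar with the performance index cited above, the following sketch (not code from the thesis) shows how the Camargo & Sentelhas (1997) confidence index combines Pearson's correlation with Willmott's index of agreement to score forecast ETo against station-derived ETo; the ETo values are invented for illustration.

```python
import numpy as np

def willmott_d(pred, obs):
    """Willmott index of agreement (0-1)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    obar = obs.mean()
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obar) + np.abs(obs - obar)) ** 2)

def confidence_index(pred, obs):
    """Camargo & Sentelhas (1997) index c = r * d (higher c maps to a better class)."""
    r = np.corrcoef(pred, obs)[0, 1]
    return r * willmott_d(pred, obs)

# Hypothetical daily ETo (mm/day): forecast-derived vs. station-derived values.
forecast = [4.1, 4.5, 3.8, 5.0, 4.7, 3.9, 4.2]
station = [4.0, 4.6, 3.5, 5.2, 4.9, 3.7, 4.4]
print(f"c = {confidence_index(forecast, station):.2f}")
```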
13

Wertorientierte Steuerung multidivisionaler Unternehmen über Residualgewinne / Value-based management of multidivisional companies using residual income

Bauer, Georg. January 1900 (has links)
Also presented as a doctoral dissertation, Regensburg, 2008. / Includes bibliography.
14

A Validation of an IT Investment Evaluation Model in Health and Social Care : A case study of ERAS Interactive Audit System (EIAS)

Lin, Chen, Ma, Jing January 2012 (has links)
Introduction: Traditional IT investment evaluation methods and techniques tend to measure the quantitative value added by eHealth. However, the contributions brought by innovation are often intangible and varied, and thus difficult to identify, measure and manage. A model presented by Vimarlund & Koch (2011) aims to identify the benefits that IT investments bring to health and social care organizations. It can be used as a tool that identifies and classifies the effects and indicators of IT innovation investments at different organizational levels for different stakeholders.
Purpose and research questions: This is an evaluative study with the purpose of validating Vimarlund & Koch's (2011) evaluation model through practical application. A case study of EIAS (ERAS Interactive Audit System) is conducted. ERAS stands for Enhanced Recovery After Surgery, an innovative process that aims to enhance patients' outcomes after major surgery; EIAS is a system that supports the ERAS process. The aim is to achieve a deep understanding of IT investment evaluation. The model is used in a real case as a guide to evaluate and identify the impact that derives from the use and implementation of IT applications. The process of evaluation can also be seen as a process of validating the model in terms of comprehensiveness, practicality and applicability. Through this study, we aim to find out: 1) What are the possible contributions that EIAS brings to Jönköping County Council? 2) How does Vimarlund & Koch's (2011) evaluation model perform in practical application, in terms of comprehensiveness, practicality and applicability?
Method: The study is evaluative and conducted using an abductive approach. A single case study is adopted as the research strategy. Qualitative data are collected through semi-structured interviews with key respondents and analyzed qualitatively with a narrative approach.
Conclusion: Guided by Vimarlund & Koch's (2011) evaluation model, the innovations that EIAS has brought into the healthcare organization are electronic information supply, internal integration of clinical information and possibilities to learn from the system. The model has been validated in terms of comprehensiveness, practicality and applicability. It is a generic model for demonstrating the contribution of IT to innovation and change in health care, and can be used in both formative and summative assessment, as well as in goal-free and goal-based evaluation. The productivity paradox was also noted, since some effects do not appear immediately after the introduction of IT. User participation can be considered an important condition for the validity of an evaluation guided by the model.
15

Performance downside risk models of the post-modern portfolio theory / VÝKONNOST DOWNSIDE RISK MODELŮ POST-MODERNÍ TEORIE PORTFOLIA

Jablonský, Petr January 2008 (has links)
The thesis provides a comparison of different portfolio models and tests their performance on the financial markets. Our analysis focuses particularly on a comparison of the classical Markowitz modern portfolio theory and the downside risk models of the post-modern portfolio theory. In addition, we consider some alternative portfolio models, for a total of eleven models tested. To evaluate and compare the performance of different portfolio models correctly, we must use a measure that is not biased toward any particular portfolio theory. We suggest solving this issue via a new approach based on utility theory and utility functions. We introduce an unbiased method for evaluating portfolio model performance using the expected utility efficient frontier. We use an asymmetric behavioural utility function to capture the behaviour of real market investors. The Markowitz model is the leading market practice. We investigate whether there are any circumstances in which some other models might provide better performance than the Markowitz model. Our research is unique for three reasons. First, it provides a comprehensive comparison of broad classes of different portfolio models. Second, we focus on the developed markets of the United States and Germany but also on the local emerging markets of the Czech Republic and Poland. These local markets have never been tested to such an extent before. Third, the empirical testing is based on a broad data set from 2003 to 2012, which enables us to test how the different portfolio models perform under different macroeconomic conditions.
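To make the contrast between the symmetric Markowitz risk measure and the downside-risk measures of post-modern portfolio theory concrete, here is a minimal sketch (not from the thesis) computing the standard deviation, the downside deviation below a minimum acceptable return (MAR), and a Sortino-style ratio for an invented return series.

```python
import numpy as np

returns = np.array([0.02, -0.01, 0.03, -0.04, 0.01, 0.05, -0.02])  # invented periodic returns
mar = 0.0  # minimum acceptable return (an assumption, not the thesis's choice)

std_dev = returns.std(ddof=1)                     # symmetric (Markowitz) risk
shortfalls = np.minimum(returns - mar, 0.0)       # keep only below-MAR outcomes
downside_dev = np.sqrt(np.mean(shortfalls ** 2))  # downside deviation
sortino = (returns.mean() - mar) / downside_dev   # reward-to-downside-risk ratio

print(f"std={std_dev:.4f}  downside_dev={downside_dev:.4f}  sortino={sortino:.2f}")
```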
16

Investigating the influence of data quality on ecological niche models for alien plant invaders

Wolmarans, Rene 08 October 2010 (has links)
Ecological niche modelling is a method designed to describe and predict the geographic distribution of an organism. This procedure aims to quantify the species-environment relationship by describing the association between the organism’s occurrence records and the environmental characteristics at these points. More simply, these models attempt to capture the ecological niche that a particular organism occupies. A popular application of ecological niche models is to predict the potential distribution of invasive alien species in their introduced range. From a biodiversity conservation perspective, a pro-active approach to the management of invasions would be to predict the potential distribution of the species so that areas susceptible to invasion can be identified. The performance of ecological niche models and the accuracy of the potential range predictions depend on the quality of the data that is used to calibrate and evaluate the models. Three different types of input data can be used to calibrate models when producing potential distribution predictions in the introduced range of an invasive alien species. Models can be calibrated with native range occurrence records, introduced range occurrence records or a combination of records from both ranges. However, native range occurrence records might suffer from geographical bias as a result of biased or incomplete sampling. When occurrence records are geographically biased, the underlying environmental gradients in which a species can persist are unlikely to be fully sampled, which could result in an underestimation of the potential distribution of the species in the introduced range. I investigated the impact of geographical bias in native range occurrence records on the performance of ecological niche models for 19 invasive plant species by simulating two geographical bias scenarios (six different treatments) in the native range occurrence records of the species. The geographical bias simulated in this study was sufficient to result in significant environmental bias across treatments, but despite this I did not find a significant effect on model performance. However, this finding was perhaps influenced by the quality of the testing dataset, and therefore one should be wary of the possible effects of geographical bias when calibrating models with native range occurrence records or combinations thereof. Secondly, models can be calibrated with records obtained from the introduced range of a species. However, when calibrating models with records from the introduced range, uncertainties in terms of the equilibrium status and introduction history could influence data quality and thus model performance. A species that has recently been introduced to a new region is unlikely to be in equilibrium with the environment, as insufficient time will have elapsed to allow it to disperse to suitable areas; the occurrence records available would therefore be unlikely to capture its full environmental niche and would underestimate the species' potential distribution. I compared model performance for seven invasive alien plant species with different simulated introduction histories when models were calibrated with native range records, introduced range records or a combination of records from both ranges. Single-introduction, multiple-introduction and well-established scenarios were simulated from the introduced range records available for each species.
Model performance was not significantly different between models calibrated with datasets representing these three types of input data under the simulated single-introduction or multiple-introduction scenarios, indicating that these datasets probably described enough of the species' environmental niche to make accurate predictions. However, model performance differed significantly between models calibrated with introduced range records and with a combination of records from both ranges under the well-established scenario. Further research is recommended to fully understand the effects of introduction history on the niche of the species. Copyright / Dissertation (MSc)--University of Pretoria, 2009. / Zoology and Entomology / unrestricted
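As a rough illustration of the calibration-and-evaluation workflow described above (not the thesis's actual algorithm, covariates or data), the sketch below fits a presence/background model on simulated native-range records and scores it on simulated introduced-range records with AUC; logistic regression stands in for whichever modelling algorithm the study used, and all values are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def sample_env(n, temp_mu, rain_mu):
    """Invented environmental covariates (temperature, rainfall) at point locations."""
    return np.column_stack([rng.normal(temp_mu, 2.0, n), rng.normal(rain_mu, 150.0, n)])

# Calibration data: native-range presences vs. background points
X_train = np.vstack([sample_env(200, 22, 900), sample_env(500, 17, 600)])
y_train = np.r_[np.ones(200), np.zeros(500)]

# Evaluation data: introduced-range presences vs. background points
X_test = np.vstack([sample_env(80, 21, 850), sample_env(200, 16, 550)])
y_test = np.r_[np.ones(80), np.zeros(200)]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC on introduced-range test records: {auc:.2f}")
```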
17

Soil Erosion from Forest Haul Roads at Stream Crossings as Influenced by Road Attributes

Lang, Albert Joseph 01 July 2016 (has links)
Forest roads and stream crossings can be important sources of sediment in forested watersheds. The purpose of this research was to compare trapped sediment and forestry best management practice (BMP) effectiveness from haul road stream crossing approaches and ditches. The three studies in this dissertation provide a quantitative assessment of sediment production and potential sediment delivery from forest haul roads in the Virginia Piedmont and Ridge and Valley regions. Sediment production rates were measured and modeled to evaluate and compare road and ditch segments near stream crossings with various ranges of road attributes, BMPs, and management objectives. Sediment mass delivered to traps from 37 haul road stream crossing approaches ranged from <0.1 to 2.7 Mg over the one-year collection period. Collectively, five approaches accounted for 82% of the total sediment mass trapped. Approaches were categorized into Low, Standard, and High road quality rankings according to road attributes. Seventy-one percent (5 of 7) of Low-ranked approaches delivered sediment to traps at rates greater than 11.2 Mg ha-1 yr-1. Nearly 90% of Standard or High road quality approaches generated less than 0.1 Mg of sediment over one year. Among approaches with less than 0.1 Mg of trapped sediment, road gradients ranged from 1% to 13%, bare soil ranged from 2% to 94%, and distances to the nearest water control structures ranged from 8.2 to 427.0 m. Such a wide spectrum of road attributes with relatively low levels of trapped sediment indicates that contemporary BMPs can mitigate problematic road attributes and reduce erosion and sediment delivery. Three erosion models (USLE-forest, RUSLE2, and WEPP) were compared to trapped sediment data from the 37 forest haul road stream crossing approaches in the first study. The second study assessed model performance from five variations of the three erosion models that have been used in previous forest operations research: USLE-roadway, USLE-soil survey, RUSLE2, WEPP-default, and WEPP-modified. The results suggest that these soil erosion models could estimate erosion and sediment delivery within 5 Mg ha-1 yr-1 for most approaches with erosion rates less than 11.2 Mg ha-1 yr-1, while model estimates varied widely for approaches that eroded above 11.2 Mg ha-1 yr-1. Based on the results from the 12 evaluations of model performance, the modified version of WEPP consistently performed better than all other model variations tested. However, results from the study suggest that additional field evaluations and improvement of soil erosion models are needed for stream crossings. The soil erosion models evaluated are not an adequate surrogate for informing policy decisions. The third study evaluated the sediment control effectiveness of four commonly recommended ditch BMPs on forest haul road stream crossing approaches. Sixty ditch segments near stream crossings were reconstructed and four ditch BMP treatments were tested against a bare control. The ditch treatments were bare (Bare), grass seed with lime fertilizer (Seed), grass seed with lime fertilizer and erosion control mat (Mat), rock check dams (Dam), and completely rocked (Rock). Mat treatments had significantly lower erosion rates than Bare and Dam, while Rock and Seed produced intermediate levels. Findings of this study suggest that the Mat, Seed, and Rock ditch BMPs were effective at reducing erosion, but Mat was most effective directly following construction because it provided immediate soil protection.
Any BMP that reduces bare soil can reduce erosion, and even natural site conditions, including litterfall and invasive vegetation, can provide erosion control. However, ditch BMPs cannot mitigate inadequate water control structures. Overall, forest roads and stream crossings have the potential to be major contributors of sediment in forested watersheds when roads are not designed well or when BMPs are not properly implemented. Forestry BMPs reduce stormwater runoff velocity and volume from forest roads, but can have varying levels of effectiveness due to site-specific conditions. Operational field studies provide valuable information regarding erosion and sediment delivery rates, which helps guide BMP recommendations and subsequently enhances water quality protection. / Ph. D.
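For orientation, the USLE-family models compared above share a multiplicative structure, A = R x K x LS x C x P; the sketch below (not from the dissertation) shows that structure with invented factor values rather than values from the study sites.

```python
def usle(R, K, LS, C, P):
    """Annual soil loss A = R*K*LS*C*P; units follow the factor system used."""
    return R * K * LS * C * P

A = usle(
    R=200.0,  # rainfall-runoff erosivity (invented)
    K=0.28,   # soil erodibility (invented)
    LS=1.6,   # slope length-steepness factor for a road approach (invented)
    C=0.45,   # cover-management factor, e.g. mostly bare running surface (invented)
    P=1.0,    # support practice factor (invented)
)
print(f"Estimated soil loss: {A:.1f}")
```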
18

A study of the critical success factors for sustainable TQM : a proposed assessment model for maturity and excellence

Nasseef, Mohammed Abdullah January 2009 (has links)
Study of the critical factors for TQM implementation throughout the years, and longitudinal analysis of secondary data on winners of prestigious quality awards such as the Malcolm Baldrige National Quality Award (MBNQA), are important. The longitudinal analysis in this research will enable verification that there are generic critical factors (CFs) for TQM implementation and generic critical areas of measurement (CAM) that, if implemented fully and successfully, will deliver excellence. It will also enable verification that these generic CFs help to ensure sustainable performance, which could help answer how excellent organisations sustain their performance consistently. By studying what excellent organisations measure and what they place emphasis on over the years, the study documents measurements that have been used to sustain excellence and considers empirically how these have led to tangible results over a period of twenty years, examining MBNQA winners from 1988 until 2008. Finally, an excellence maturity assessment tool ('assessment software') was developed: from the examination of winning case studies over a long period of time, lists of critical factors of implementation (CFI) and critical areas of measurement (CAM) were extracted and used alongside the EFQM Excellence Model and Zairi's two models, the 'Index of Excellence' and the 'Ladder of Excellence'. These formed the basis of the assessment tool; through it, companies will be able to understand their level of excellence implementation and their position compared to world-class organisations.
19

Development of statistical methods for the surveillance and monitoring of adverse events which adjust for differing patient and surgical risks

Webster, Ronald A. January 2008 (has links)
The research in this thesis has been undertaken to develop statistical tools for monitoring adverse events in hospitals that adjust for varying patient risk. The studies involved a detailed literature review of risk adjustment scores for patient mortality following cardiac surgery, comparison of institutional performance, the performance of risk-adjusted CUSUM schemes for varying risk profiles of the populations being monitored, the effects of uncertainty in the estimates of expected probabilities of mortality on the performance of risk-adjusted CUSUM schemes, and the instability of the estimated average run lengths of risk-adjusted CUSUM schemes found using the Markov chain approach. The literature review of cardiac surgical risk found that the number of risk factors in a risk model and its discriminating ability were independent, the risk factors could be classified into their "dimensions of risk", and a risk score could not be generalized to populations remote from its developmental database if accurate predictions of patients' probabilities of mortality were required. The conclusions were that an institution could use an "off the shelf" risk score, provided it was recalibrated, or it could construct a customized risk score with risk factors that provide at least one measure for each dimension of risk. The use of report cards to publish adverse outcomes as a tool for quality improvement has been criticized in the medical literature. An analysis of the report cards for cardiac surgery in New York State showed that the institutions' outcome rates appeared overdispersed compared to the model used to construct confidence intervals, and the uncertainty associated with the estimation of institutions' outcome rates could be mitigated with trend analysis. A second analysis of the mortality of patients admitted to coronary care units demonstrated the use of notched box plots, fixed and random effect models, and risk-adjusted CUSUM schemes as tools to identify outlying hospitals. An important finding from the literature review was that the primary reason for publication of outcomes is to ensure that health care institutions are accountable for the services they provide. A detailed review of the risk-adjusted CUSUM scheme was undertaken and the use of average run lengths (ARLs) to assess the scheme, as the risk profile of the population being monitored changes, was justified. The ARLs for in-control and out-of-control processes were found to increase markedly as the average outcome rate of the patient population decreased towards zero. A modification of the risk-adjusted CUSUM scheme, where the step size from in-control to out-of-control outcome probabilities was constrained to no less than 0.05, was proposed. The ARLs of this "minimum effect" CUSUM scheme were found to be stable. The previous assessment of the risk-adjusted CUSUM scheme assumed that the predicted probability of a patient's mortality is known. A study of its performance, where the estimates of the expected probability of patient mortality were uncertain, showed that uncertainty at the patient level did not affect the performance of the CUSUM schemes, provided that the risk score was well calibrated. Uncertainty in the calibration of the risk model appeared to cause considerable variation in the ARL performance measures. The ARLs of the risk-adjusted CUSUM schemes were approximated using simulation because the approximation method using the Markov chain property of CUSUMs, as proposed by Steiner et al. (2000), gave unstable results.
The cause of the instability was the method of computing the Markov chain transition probabilities, where probability is concentrated at the midpoint of its Markov state. If probability was assumed to be uniformly distributed over each Markov state, the ARLs were stabilized, provided that the scores for the patients' risk of adverse outcomes were discrete and finite.
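To illustrate the risk-adjusted CUSUM and the simulation-based ARL estimation discussed above, here is a minimal sketch in the spirit of Steiner et al. (2000); the risk distribution, odds-ratio shift and decision threshold are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def cusum_run_length(p, R_A=2.0, h=4.5):
    """Patients observed until the upward risk-adjusted CUSUM signals (odds ratio 1 -> R_A)."""
    s = 0.0
    for t, pt in enumerate(p, start=1):
        y = rng.random() < pt  # simulate an in-control outcome for a patient with risk pt
        if y:
            w = np.log(R_A / (1.0 - pt + R_A * pt))  # log-likelihood-ratio weight, adverse outcome
        else:
            w = np.log(1.0 / (1.0 - pt + R_A * pt))  # weight, no adverse outcome
        s = max(0.0, s + w)
        if s > h:
            return t
    return len(p)  # truncated run (understates long ARLs)

# Predicted mortality risks from a hypothetical, well-calibrated risk score (mean ~5%)
risks = rng.beta(2, 38, size=(500, 2000))
arl0 = np.mean([cusum_run_length(p) for p in risks])
print(f"Simulated in-control ARL (runs truncated at 2000 patients): {arl0:.0f}")
```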
20

Predicting post-release software faults in open source software as a means of measuring intrinsic software product quality / Prédire les défauts Post-Release de logiciels à code ouvert comme méthode pour mesurer la qualité intrinsèque du produit logiciel

Ndenga Malanga, Kennedy 22 November 2017 (has links)
Faulty software has expensive consequences. To mitigate them, software developers have to identify and fix faulty components before releasing their products, and users have to gauge the delivered quality of software before adopting it. However, the abstract nature and multiple dimensions of software quality make it difficult for organizations to measure. Software quality metrics can be used as proxies of software quality, but there is a need for a software process metric that can guarantee consistently superior fault prediction performance across different contexts. This research sought to determine a predictor of software faults that exhibits the best prediction performance, requires the least effort to detect software faults, and has the minimum cost of misclassifying components. It also investigated the effect of combining predictors on the performance of software fault prediction models. Experimental data were derived from four OSS projects. Logistic Regression was used to predict bug status, while Linear Regression was used to predict the number of bugs per file. Models built with Change Burst metrics registered better overall performance than those built with Change, Code Churn, Developer Networks and Source Code software metrics: Change Burst metrics recorded the highest values for the numerical performance measures, exhibited the highest fault detection probabilities and had the least cost of misclassification of components. The study found that Change Burst metrics could effectively predict software faults.
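As an illustration of the modelling approach named above (not the thesis's data, tooling or exact feature set), the sketch below fits a logistic regression to invented change-burst-style metrics to predict whether a file is faulty; all column names and values are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 400
files = pd.DataFrame({
    "num_bursts": rng.poisson(3, n),         # count of change bursts per file (hypothetical)
    "max_burst_size": rng.poisson(5, n),     # largest burst of consecutive changes (hypothetical)
    "total_churn": rng.gamma(2.0, 50.0, n),  # lines added + deleted (hypothetical)
})
# Invented ground truth: more burst activity -> higher fault probability
logit = 0.3 * files["num_bursts"] + 0.01 * files["total_churn"] - 2.5
files["faulty"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    files.drop(columns="faulty"), files["faulty"], test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"AUC: {roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]):.2f}")
```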
