91

Investiční rozhodování ve státní správě a samosprávě. Analýza a doporučení ke zvýšení kvality. / Investment Decision Making of State and Local Governments: Analysis and Recommendations for Improving Quality

Kula, David January 2010 (has links)
This dissertation collects, analyses and evaluates information about investment decision making within public administration bodies, examined in relation to the allocation of public funds to investment activities and projects. The main goal is to analyse and evaluate the current state of investment decision making in the public sector, to offer new or updated knowledge on these issues, and to provide recommendations for improving the investment decision making of public administration bodies. The thesis first reviews current knowledge in the field of investment decision making, followed by an analysis of the appraisal methods used to evaluate and select investment projects. It concludes with recommendations for investment decision making in the form of a normative model intended to increase the societal benefit of public investment expenditure. The analysis uses data obtained through a questionnaire survey of 430 subsidised firms, 169 state organisational units and state funds, and 130 cities, boroughs and counties, complemented by secondary data from selected ministries and agencies, legislation and the literature.
92

Analyses of sustainability goals: Applying statistical models to socio-economic and environmental data

Tindall, Nathaniel W. 07 January 2016 (has links)
This research investigates the environment and development issues of three stakeholders at multiple scales: global, national, regional, and local. Through the analysis of financial, social, and environmental metrics, the potential benefits and risks of each case study are estimated and their implications considered. The first case study investigates the relationship between manufacturing and environmental performance. Over 700 facilities of a global manufacturer, producing 11 products on six continents, were studied to understand global variations and determinants of environmental performance. Water, energy, carbon dioxide emissions, and production data from these facilities were analyzed to assess environmental performance, and the relationship between production composition at the individual facility and environmental performance was investigated. Location-independent environmental performance metrics were combined to provide both global and local measures of environmental performance. These models were extended to estimate future water use, energy use, and greenhouse gas emissions under potential demand shifts. Natural resource depletion risks were investigated, and mitigation strategies related to vulnerabilities and exposure were discussed. The case study demonstrates how data from multiple facilities can be used to characterize variability amongst facilities and to preview how changes in production may affect overall corporate environmental metrics. The developed framework offers a new approach to accounting for environmental performance and degradation and to assessing potential risk in locations where climate change may affect the availability of production resources (i.e., water and energy), and is thus a tool for understanding risk and maintaining competitive advantage. The second case study addresses the delivery of affordable and sustainable energy. Energy pricing was evaluated by modeling individual energy consumption behaviors: a heterogeneous set of residential households in both urban and rural environments was simulated to understand demand shifts in the residential energy end-use sector caused by electricity pricing. An agent-based model (ABM) was created to investigate the interactions of energy policy and individual household behaviors; the model incorporated empirical data on beliefs and perceptions of energy, integrating environmental beliefs, energy pricing grievances, and social networking dynamics into the ABM structure. The model projected aggregate residential-sector electricity demand over a 30-year period and distinguished the respective numbers of households that use only electricity, that rely solely on indigenous fuels, and that use both indigenous fuels and electricity. The model is one of the first characterizations of household electricity demand response and fuel transitions related to energy pricing at the individual household level, and one of the first approaches to evaluating consumer grievance and rioting responses to energy service delivery. The model framework is suggested as a tool for energy policy analysis and can readily be adapted to assist policy makers in other developing countries. In the final case study, a framework was developed for a broad cost-benefit and greenhouse gas evaluation of transit systems and their associated developments, applied in a case study of the Atlanta BeltLine.
The net greenhouse gas emissions from the BeltLine light rail system will depend on the energy efficiency of the streetcars themselves, the greenhouse gas emissions from the electricity used to power them, the extent to which people use the BeltLine instead of driving personal vehicles, and the efficiency of those vehicles. The effects of ridership, residential densities, and housing mix on environmental performance were investigated and used to estimate overall system efficacy. The range of the system's net present value was estimated from time-varying health, congestion, per capita greenhouse gas, and other societal costs and benefits, together with construction and operational costs. The 95% confidence interval ranges from a potential loss of $860 million to a benefit of $2.3 billion, with a mean net present value of $610 million. The system is estimated to generate a savings of $220 per ton of emitted CO2, with a 95% confidence interval bounded by a potential social cost of $86 per ton CO2 and a savings of $595 per ton CO2.
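The reported net-present-value range can be illustrated with a small Monte Carlo sketch: draw uncertain construction costs, operating costs and annual societal benefits, discount them over the appraisal horizon, and summarise the resulting NPV distribution. All distributions and dollar figures below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def npv_draws(n_draws=10_000, years=30, rate=0.03):
    """Monte Carlo NPV of a transit project under uncertain costs and benefits.

    Every parameter value here is an illustrative placeholder, not thesis data.
    """
    capex = rng.normal(600, 100, n_draws)            # construction cost, $M, year 0
    opex = rng.normal(20, 5, (n_draws, years))       # annual operating cost, $M
    benefit = rng.normal(55, 25, (n_draws, years))   # annual societal benefit, $M
    t = np.arange(1, years + 1)
    discount = (1 + rate) ** -t                      # discount factor per year
    return -capex + ((benefit - opex) * discount).sum(axis=1)

npv = npv_draws()
lo, hi = np.percentile(npv, [2.5, 97.5])
print(f"mean NPV ${npv.mean():.0f}M, 95% interval [${lo:.0f}M, ${hi:.0f}M]")
```

The same machinery extends naturally to per-ton CO2 savings by dividing each NPV draw by a corresponding draw of lifetime emissions reduced.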
93

Economics of fire : exploring fire incident data for a design tool methodology

Salter, Chris January 2013 (has links)
Fires within the built environment are a fact of life, and through design and the application of the building regulations and design codes the risk of fire to building occupants can be minimised. However, the building regulations within the UK do not deal with property protection and focus solely on the safety of the building occupants. This research details the statistical analysis of the UK Fire and Rescue Service and Fire Protection Association fire incident databases to create a loss-model framework, allowing the designers of a building's fire safety systems to conduct a cost-benefit analysis of installing additional fire protection solely for property protection. The analysis of the FDR 1 incident database shows that the Fire and Rescue Service's data collection methods would ideally need to change to allow further risk analysis of the UK building stock; that the factors most affecting the size of a fire are the time from ignition to discovery and the presence of dangerous materials; that sprinkler activation rates may not be as high as claimed by sprinkler groups; and that activation of an alarm system is associated with smaller fires. The original contribution to knowledge of this PhD is the analysis of the FDR 1 database to create a loss model, using data from both the Fire Protection Association and the Fire and Rescue Service.
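The kind of property-protection cost-benefit test the loss-model framework is meant to support reduces, in its simplest form, to an expected-value comparison. A minimal sketch follows; the fire frequency, loss figures and sprinkler effect are assumed placeholders, not statistics from the FDR 1 analysis.

```python
# Minimal property-protection cost-benefit test for sprinklers.
# All numbers are illustrative assumptions, not FDR 1 statistics.

annual_fire_prob = 0.002          # chance of a significant fire per building-year
mean_loss_unprotected = 450_000   # expected property loss given a fire (GBP)
sprinkler_loss_reduction = 0.70   # assumed fraction of loss avoided if sprinklered
sprinkler_annualised_cost = 900   # installation + maintenance spread per year (GBP)

expected_annual_loss = annual_fire_prob * mean_loss_unprotected
expected_annual_saving = expected_annual_loss * sprinkler_loss_reduction
net_benefit = expected_annual_saving - sprinkler_annualised_cost

print(f"expected annual loss without sprinklers: £{expected_annual_loss:,.0f}")
print(f"expected annual saving with sprinklers:  £{expected_annual_saving:,.0f}")
print(f"net annual benefit of installing:        £{net_benefit:,.0f}")
```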
94

Assessing sheep’s wool as a filtration material for the removal of formaldehyde in the indoor environment

Wang, Jennifer, active 21st century 11 September 2014 (has links)
Formaldehyde is one of the most prevalent and toxic chemicals found indoors, where we spend ~90% of our lives. Chronic exposure to formaldehyde indoors is therefore of particular concern, especially for sensitive populations such as children and infants. Unfortunately, no effective filtration control strategy exists for its removal. While research has shown that proteins in sheep's wool bind permanently to formaldehyde, the extent of wool's formaldehyde removal efficiency and effective removal capacity when applied in active filtration settings is unknown. In this research, wool capacity experiments were designed using a plug flow reactor (PFR) and an air cleaner unit to explore the capacity of wool to remove formaldehyde under different active filtration designs. Using the measured wool capacity, filter life and annual costs were modeled in a typical 50 m³ room for a variety of theoretical filter operation lengths, air exchange rates, and source concentrations. For each case, annual filtration costs were compared to the monetary benefits derived from wool resale and from the reduction in cancer rates for different population types, using the DALY human exposure metric. Wool filtration was observed to reduce formaldehyde concentrations by 60-80%, although the effective wool removal capacity was highly dependent on the fluid mechanics of the filtration unit. The air cleaner setup yielded approximately six times greater capacity than the small-scale PFR designed to mimic active filtration (670 µg versus 110 µg HCHO removed per g of wool, respectively). The outcomes of these experiments suggest that kinematic variations resulting from different wool packing densities, air flow rates, and degrees of mixing in the units influence the filtration efficiency and effective capacity of wool. The results of the cost-benefit analysis show that under the higher wool-capacity conditions, cost-effectiveness is achieved in the majority of room cases when sensitive populations such as children and infants are present. However, for the average population scenarios, filtration was rarely worthwhile, showing that adults benefit less from reductions in chronic formaldehyde exposure. These results suggest that active filtration would be most beneficial and cost-effective in settings such as schools, nurseries, and hospitals, which have a high percentage of sensitive populations.
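The reported capacity figure translates into filter mass and replacement intervals through a simple mass balance. The sketch below shows the arithmetic; the room volume, air exchange rate, source concentration and wool mass are illustrative assumptions, with the capacity and removal efficiency taken as rough values in the range reported above.

```python
# Sketch: rough filter life for a wool formaldehyde filter in a single room,
# from a simple mass balance. Parameter values are illustrative assumptions.

capacity_ug_per_g = 670.0      # µg HCHO removed per g wool (air-cleaner-style value)
wool_mass_g = 2_000.0          # assumed mass of wool in the filter
room_volume_m3 = 50.0          # typical room size from the abstract
ach = 0.5                      # assumed air changes per hour through the filter
source_conc_ug_m3 = 40.0       # assumed indoor formaldehyde concentration
removal_efficiency = 0.70      # mid-range of the observed 60-80% drop

airflow_m3_per_h = room_volume_m3 * ach
removed_ug_per_h = airflow_m3_per_h * source_conc_ug_m3 * removal_efficiency
filter_life_h = capacity_ug_per_g * wool_mass_g / removed_ug_per_h

print(f"formaldehyde removed: {removed_ug_per_h:.0f} µg/h")
print(f"filter life: {filter_life_h:.0f} h (~{filter_life_h / 24:.0f} days)")
```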
95

Valuing environmental benefits using the contingent valuation method : an econometric analysis

Kriström, Bengt January 1990 (has links)
The purpose of this study is to investigate methods for assessing the value people place on preserving our natural environments and resources. It focuses on the contingent valuation method, a method for directly asking people about their preferences, and in particular on the use of discrete response data in contingent valuation experiments. The first part of the study explores the economic theory of the total value of a natural resource, analysing its principal components: use values and non-use values. Our application is a study of the value Swedes attach to the preservation of eleven forest areas with high recreational value and unique environmental qualities. Six forests were selected on the basis of an official investigation covering virgin forests and other areas with unique environmental qualities; in addition, five virgin forests were selected. Two types of valuation question are analysed, the continuous and the discrete. The first type asks directly about willingness to pay, while the second suggests a price that the respondent may accept or reject. The results of the continuous question suggest an average willingness to pay of about 1,000 SEK per household for preservation of the areas. Further analysis suggests that this value depends on several characteristics of the respondent, such as the respondent's income and whether or not the respondent is an altruist. Two econometric approaches are used to analyse the discrete responses: a flexible parametric approach and a non-parametric approach. In addition, a Bayesian approach is described. It is shown that the results of a contingent valuation experiment may depend to some extent on the choice of probability model. A re-sampling approach and a Monte Carlo approach are used to shed light on the design of a contingent valuation experiment with discrete responses. The econometric analysis ends with an analysis of the often observed disparity between discrete and continuous valuation questions. A cost-benefit analysis is performed in the final chapter to illustrate how the contingent valuation approach may be combined with opportunity cost data to improve the decision basis in the environmental policy domain; this analysis does not give strong support to the cutting alternative. Finally, the results of this investigation are compared with evidence from other studies. The main conclusion is that assessing people's sentiments towards changes in our natural environments and resources can be a useful supplement to decisions about their proper husbandry; the study also highlights the importance of careful statistical analysis of data obtained from contingent valuation experiments.
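For the discrete (referendum-style) question, a standard way to recover willingness to pay is to fit a logistic acceptance model to the yes/no responses at different bid levels. A minimal sketch on simulated data follows; the bid levels and the underlying WTP distribution are assumptions for illustration, not the Swedish survey data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated referendum-style CV data: each respondent sees one bid (SEK)
# and answers yes/no. The "true" WTP distribution is an assumption.
bids = rng.choice([200, 500, 800, 1100, 1500, 2000], size=2000)
true_wtp = 1000 + 300 * rng.logistic(size=bids.size)
yes = (true_wtp >= bids).astype(float)

# Fit P(yes | bid) = 1 / (1 + exp(-(a + b*bid))) by Newton-Raphson,
# with the bid expressed in thousands of SEK for numerical stability.
X = np.column_stack([np.ones(bids.size), bids / 1000.0])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (yes - p)
    hess = -(X * (p * (1 - p))[:, None]).T @ X
    beta -= np.linalg.solve(hess, grad)

a, b = beta
# For this acceptance curve the median WTP (the bid where P(yes) = 0.5)
# is -a/b; convert back from thousands of SEK.
print(f"estimated median WTP: {-a / b * 1000:.0f} SEK per household")
```

A non-parametric alternative in the same spirit is to estimate the acceptance probability at each bid level directly and integrate under the resulting step function.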
96

A Bayesian cost-benefit approach to sample size determination and evaluation in clinical trials

Kikuchi, Takashi January 2011 (has links)
Current practice for sample size computation in clinical trials is largely based on frequentist or classical methods. These methods have the drawback of requiring a point estimate of the variance of the treatment effect and are based on arbitrary settings of type I and II errors. They also do not directly address the question of achieving the best balance between the costs of the trial and the possible benefits of using a new medical treatment, and fail to consider the important fact that the number of users depends on the evidence for improvement over the current treatment. A novel Bayesian approach, Behavioral Bayes (or BeBay for short) (Gittins and Pezeshk, 2000a,b, 2002a,b; Pezeshk, 2003), assumes that the number of patients switching to the new treatment depends on the strength of the evidence provided by clinical trials, and takes a value between zero and the number of potential patients in the country. The better a new treatment, the more patients switch to it and the greater the resulting benefit. The model defines the optimal sample size as the sample size that maximises the expected net benefit resulting from a clinical trial. Gittins and Pezeshk use a simple form of benefit function for paired comparisons between two medical treatments and assume that the variance of the efficacy is known. The research in this thesis generalises these original conditions by introducing a logistic benefit function to take account of differences in efficacy and safety between two drugs. The model is also extended to the more general cases of unpaired comparisons and unknown variance. The expected net benefit defined by Gittins and Pezeshk is based on the efficacy of the new drug only; it does not consider the incidence of adverse reactions and their effect on patients' preferences. Here we include the costs of treating adverse reactions and calculate the total benefit in terms of how much the new drug can reduce societal expenditure. We describe how our model may be used for the design of phase III clinical trials, cluster randomised clinical trials and bridging studies, in some detail and using illustrative examples based on published studies. For phase III trials we allow the possibility of unequal treatment group sizes, which often occur in practice. Bridging studies are those carried out to extend the range of applicability of an established drug, for example to new ethnic groups. Throughout, the objective of our procedures is to optimise the cost-benefit in terms of national health care. BeBay is the leading methodology for determining sample sizes on this basis; it explicitly takes account of the roles of three decision makers, namely patients and doctors, pharmaceutical companies, and the health authority.
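The core of the approach (choose the sample size that maximises expected net benefit, with the number of switching patients driven by the strength of the trial evidence) can be sketched with a stylised calculation. The uptake curve, prior, costs and monetary values below are illustrative assumptions, not the benefit functions used by Gittins and Pezeshk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylised expected-net-benefit search over per-arm sample sizes.
# All parameter values are illustrative assumptions.
prior_mean, prior_sd = 0.3, 0.2      # prior on the true efficacy gain
sigma = 1.0                          # known outcome SD per patient
potential_patients = 1_000_000       # patients who could switch nationally
value_per_unit_effect = 500.0        # GBP of benefit per patient per unit of effect
cost_per_subject = 5_000.0           # marginal cost of one trial subject
fixed_cost = 1_000_000.0             # fixed cost of running the trial

def expected_net_benefit(n, draws=20_000):
    """Net benefit averaged over the prior and over trial outcomes, per-arm size n."""
    delta = rng.normal(prior_mean, prior_sd, draws)    # true effect draws
    se = sigma * np.sqrt(2.0 / n)                      # SE of the estimated difference
    z = rng.normal(delta / se, 1.0)                    # trial z-statistic per draw
    uptake = 1.0 / (1.0 + np.exp(-(z - 2.0)))          # assumed evidence-driven uptake
    gross = potential_patients * uptake * value_per_unit_effect * delta
    return gross.mean() - (fixed_cost + cost_per_subject * 2 * n)

sizes = np.arange(50, 2001, 50)
enb = [expected_net_benefit(int(n)) for n in sizes]
print(f"optimal per-arm size under these assumptions: {sizes[int(np.argmax(enb))]}")
```

Because the benefit side saturates while trial costs grow linearly in the sample size, the expected net benefit has an interior maximum, which is the optimal sample size in this framework.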
97

Investigation of regulatory efficiency with reference to the EU Water Framework Directive : an application to Scottish agriculture

Lago Aresti, Manuel January 2009 (has links)
The Water Framework Directive (WFD) has the stated objective of delivering good status (GS) for Europe's surface waters and groundwaters. But meeting GS is cost dependent, and in some water bodies pollution abatement costs may be high or judged disproportionate. The definition and assessment of disproportionate costs are central to the justification of time-frame derogations and/or the lowering of the environmental objectives (standards) for compliance at a water body. European official guidance is discretionary about the interpretation of disproportionate costs, which consequently can be interpreted and applied differently across Member States. The aim of this research is to clarify the definition of disproportionality and to convey a consistent interpretation that is fully compliant with the economic requirements of the Directive, whilst also being mindful of the principles of pollution control and welfare economics theory. On this basis, standard-setting derogations should aim to reach socially optimal decisions and be judged with reference to a combination of explicit cost and benefit curves (an application of Cost-Benefit Analysis) and financial affordability tests. Arguably, these tools should be more influential in the development of derogation decisions across Member States, including Scotland. The WFD is expected to have extensive effects on Scottish agriculture, which faces the challenge of maintaining its competitiveness while protecting water resources. Focusing on the socio-economic impacts of achieving diffuse water pollution targets for the sector, the thesis proposes and evaluates a series of independent tests for the assessment of disproportionate costs. These are: i) the development of abatement cost curves for agricultural phosphorus (P) mitigation options on different farm systems; ii) a financial characterisation of farming in Scotland and of the impact on profits of achieving different P load reductions at farm level, exploring the sector's "affordability" and "ability to pay"; and iii) a benefits assessment using discrete choice modelling to explore public preferences for pollution control and to measure the non-market benefits of WFD water quality improvements in Scotland. Results from these tests provide benchmarks for the definition of disproportionate costs and are relevant to other aspects of the economic analysis of water use in Scotland. This study helps to clarify the nature of agricultural water use and how it leads to social trade-offs with other, non-agricultural users. Ultimately, this perspective adds to the debate on how and where water is best employed to maximise its value to society.
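The disproportionality comparison underpinning tests i) and iii) can be illustrated with a small least-cost ranking exercise: build a marginal abatement cost curve from candidate measures, adopt measures until the phosphorus target is met, and compare total cost with the estimated benefits. The measures, costs, target and benefit value below are invented for illustration, not results from the Scottish analysis.

```python
# Illustrative least-cost ranking of P mitigation measures and a simple
# disproportionality check. All numbers are invented placeholders.

measures = [
    # (name, P reduction in kg/yr, annual cost in GBP)
    ("Fencing off watercourses", 120, 1_800),
    ("Buffer strips",            300, 6_000),
    ("Reduced fertiliser rates", 200, 5_000),
    ("Constructed wetland",      450, 18_000),
    ("Slurry store upgrade",     150, 9_000),
]
target_kg = 700            # required P load reduction for the water body
benefit_estimate = 40_000  # assumed annual non-market benefit of compliance (GBP)

# Rank measures by cost-effectiveness (GBP per kg P) and adopt until the target is met.
ranked = sorted(measures, key=lambda m: m[2] / m[1])
removed, total_cost, adopted = 0, 0, []
for name, kg, cost in ranked:
    if removed >= target_kg:
        break
    adopted.append(name)
    removed += kg
    total_cost += cost

print("adopted measures:", ", ".join(adopted))
print(f"P removed: {removed} kg/yr at £{total_cost:,} per year")
print("costs disproportionate?", total_cost > benefit_estimate)
```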
98

The Valuation of River Ecosystem Services

Jiang, Wei 09 November 2016 (has links)
No description available.
99

Risk assessment of natural hazards : Data availability and applicability for loss quantification

Grahn, Tonje January 2017 (has links)
Quantitative risk assessments are a fundamental part of economic analysis and of natural hazard risk management models. They increase the objectivity and transparency of risk assessments and guide policymakers in making efficient decisions when spending public resources on risk reduction. Managing hazard risks calls for an understanding of the relationships between hazard exposure and the vulnerability of humans and assets. The purpose of this thesis is to identify and estimate causal relationships between hazards, exposure and vulnerability, and to evaluate the applicability of systematically collected data sets for producing reliable and generalizable quantitative information for decision support. Several causal relationships have been established. For example, the extent of lake flood damage to residential buildings depends on flood duration, distance to the waterfront, the age of the house and, in some cases, the water level. Results also show that homeowners' private initiatives to reduce risk, prior to or during a flood, reduced their probability of suffering building damage by as much as 40 percent. Further, a causal relationship has been established between the number of people exposed to quick clay landslides and landslide fatalities. Even though several relationships were identified between flood exposure and vulnerability, the effects explain only a small part of the total variation in damages, especially at object level. The availability of damage data in Sweden is generally low: the most comprehensive damage data sets are held by private insurance companies and are not publicly available. Data scarcity is a barrier to quantitative natural hazard risk assessment in Sweden, and more effort should therefore be made to collect data systematically for modelling and validating standardized approaches to quantitative damage estimation. / Natural hazard damages have increased worldwide, with impacts from hydrological and meteorological hazards increasing the most. An analysis of insurance payments shows that flood damages have been increasing in Sweden as well. With climate change and growing populations this trend can be expected to continue unless efforts are made to reduce risk and adapt communities to the threats. Economic analysis and quantitative risk assessments of natural hazards are fundamental parts of a risk management process that can support policymakers' decisions on efficient risk reduction. However, developing reliable damage estimation models requires knowledge of the relationships between hazard exposure and the vulnerability of exposed objects and persons. This thesis establishes causal relationships between residential exposure and flood damage on the basis of insurance data. I also found that private damage-reducing actions decreased the probability of damage to buildings by almost 40 percent. Further, a causal relationship has been established between the number of people exposed to quick clay landslides and fatalities. Even though several relationships have been identified between flood exposure and vulnerability, the effects explain only a small part of the total variation in damages, especially at object level, and more effort is needed to develop quantitative models for risk assessment purposes.
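Findings such as "private initiative reduced the probability of building damage by roughly 40 percent" amount to comparing damage rates between exposed properties with and without mitigation. A minimal sketch on simulated exposure records follows; the probabilities and mitigation share are assumptions, not the Swedish insurance data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated exposure records; all probabilities are illustrative assumptions.
n = 5_000
mitigated = rng.random(n) < 0.35             # share of exposed homeowners who acted
p_damage = np.where(mitigated, 0.30, 0.50)   # assumed damage probabilities
damaged = rng.random(n) < p_damage

p_no_action = damaged[~mitigated].mean()
p_action = damaged[mitigated].mean()
print(f"P(damage | no action):  {p_no_action:.2f}")
print(f"P(damage | mitigation): {p_action:.2f}")
print(f"relative reduction:     {1 - p_action / p_no_action:.0%}")
```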
100

Analyse de sensibilité de modèles spatialisés : application à l'analyse coût-bénéfice de projets de prévention du risque d'inondation / Variance-based sensitivity analysis for spatially distributed models: application to cost-benefit analysis of flood risk management plans. Keywords: spatially distributed model; sensitivity analysis; uncertainty; scale; geostatistics; CBA; flood; damage.

Saint-Geours, Nathalie 29 November 2012 (has links)
Variance-based global sensitivity analysis is used to study how the variability of the output of a numerical model can be apportioned to different sources of uncertainty in its inputs. It is an essential component of model building, as it helps identify the model inputs that account for most of the model output variance. However, this approach is seldom applied in the Earth and Environmental Sciences, partly because most numerical models developed in this field include spatially distributed inputs or outputs. Our research work aims to show how global sensitivity analysis can be adapted to such spatial models, and more precisely how to cope with two issues: i) the presence of spatial auto-correlation in the model inputs, and ii) scaling issues. We base our research on a detailed study of the numerical code NOE, a spatial model for cost-benefit analysis of flood risk management plans. We first investigate how variance-based sensitivity indices can be computed for spatially distributed model inputs, focusing on the "map labelling" approach, which can handle any complex spatial structure of uncertainty in the model inputs and assess its effect on the model output. Next, we explore how scaling issues interact with the sensitivity analysis of a spatial model.
We define "block sensitivity indices" and "site sensitivity indices" to account for the role of the spatial support of the model output, and establish the properties of these sensitivity indices under specific conditions. In particular, we show that the relative contribution of an uncertain spatially distributed model input to the variance of the model output increases with its correlation length and decreases with the size of the spatial support over which the model output is aggregated. By applying our results to the NOE modelling chain, we also draw a number of lessons for better dealing with uncertainties in flood damage modelling and in the cost-benefit analysis of flood risk management plans.
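The central result (the block sensitivity index of a spatially autocorrelated input shrinks as the spatial support of the aggregated output grows) can be reproduced on a toy model with a pick-freeze Sobol estimator. The model, correlation length and block sizes below are illustrative assumptions, not the NOE code.

```python
import numpy as np

rng = np.random.default_rng(4)

L, corr_len, n = 64, 4.0, 20_000
x = np.arange(L)
# Exponential covariance for the spatially autocorrelated input field Z.
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(L))

def model(z, w, block):
    """Toy spatial model: output aggregated over the first `block` sites."""
    return z[:, :block].mean(axis=1) + w

z_a = rng.standard_normal((n, L)) @ chol.T   # spatially correlated field input
w_a = rng.standard_normal(n)                 # scalar, non-spatial input
w_b = rng.standard_normal(n)                 # independent resample of the scalar input

for block in (1, 4, 16, 64):
    y_a = model(z_a, w_a, block)
    y_pf = model(z_a, w_b, block)            # pick-freeze: keep Z, resample W
    s_z = (np.mean(y_a * y_pf) - y_a.mean() * y_pf.mean()) / y_a.var()
    print(f"support of {block:2d} sites: first-order Sobol index of the field ~ {s_z:.2f}")
```

As the aggregation support grows past the correlation length, averaging smooths out the field's contribution and its sensitivity index falls, which is the behaviour the block sensitivity indices formalise.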
